text,target " In these notes we formally describe the functionality of Calculating Valid Domains from the BDD representing the solution space of valid configurations. The formalization is largely based on the CLab configuration framework. ",Calculating Valid Domains for BDD-Based Interactive Configuration " Motivation: Profile hidden Markov Models (pHMMs) are a popular and very useful tool in the detection of the remote homologue protein families. Unfortunately, their performance is not always satisfactory when proteins are in the 'twilight zone'. We present HMMER-STRUCT, a model construction algorithm and tool that tries to improve pHMM performance by using structural information while training pHMMs. As a first step, HMMER-STRUCT constructs a set of pHMMs. Each pHMM is constructed by weighting each residue in an aligned protein according to a specific structural property of the residue. Properties used were primary, secondary and tertiary structures, accessibility and packing. HMMER-STRUCT then prioritizes the results by voting. Results: We used the SCOP database to perform our experiments. Throughout, we apply leave-one-family-out cross-validation over protein superfamilies. First, we used the MAMMOTH-mult structural aligner to align the training set proteins. Then, we performed two sets of experiments. In a first experiment, we compared structure weighted models against standard pHMMs and against each other. In a second experiment, we compared the voting model against individual pHMMs. We compare method performance through ROC curves and through Precision/Recall curves, and assess significance through the paired two tailed t-test. Our results show significant performance improvements of all structurally weighted models over default HMMER, and a significant improvement in sensitivity of the combined models over both the original model and the structurally weighted models. ",A study of structural properties on profiles HMMs " This paper proposes an approach to training rough set models using Bayesian framework trained using Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted through sampling in the rough set granule space and Metropolis algorithm is used as an acceptance criteria. The proposed method is tested to estimate the risk of HIV given demographic data. The results obtained shows that the proposed approach is able to achieve an average accuracy of 58% with the accuracy varying up to 66%. In addition the Bayesian rough set give the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV. ",Bayesian approach to rough set " Noise, corruptions and variations in face images can seriously hurt the performance of face recognition systems. To make such systems robust, multiclass neuralnetwork classifiers capable of learning from noisy data have been suggested. However on large face data sets such systems cannot provide the robustness at a high level. In this paper we explore a pairwise neural-network system as an alternative approach to improving the robustness of face recognition. In our experiments this approach is shown to outperform the multiclass neural-network system in terms of the predictive accuracy on the face images corrupted by noise. 
","Comparing Robustness of Pairwise and Multiclass Neural-Network Systems for Face Recognition" " Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as final result. In the meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning for the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing the classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-line) or incrementally along evolution (On-line). Experiments on a set of benchmark problems show that Off-line outperforms single-hypothesis evolutionary learning and state-of-art Boosting and generates smaller classifier ensembles. ",Ensemble Learning for Free with Evolutionary Algorithms ? " Gaussian mixture models (GMM) and support vector machines (SVM) are introduced to classify faults in a population of cylindrical shells. The proposed procedures are tested on a population of 20 cylindrical shells and their performance is compared to the procedure, which uses multi-layer perceptrons (MLP). The modal properties extracted from vibration data are used to train the GMM, SVM and MLP. It is observed that the GMM produces 98%, SVM produces 94% classification accuracy while the MLP produces 88% classification rates. ","Fault Classification in Cylinders Using Multilayer Perceptrons, Support Vector Machines and Guassian Mixture Models" " The act of bluffing confounds game designers to this day. The very nature of bluffing is even open for debate, adding further complication to the process of creating intelligent virtual players that can bluff, and hence play, realistically. Through the use of intelligent, learning agents, and carefully designed agent outlooks, an agent can in fact learn to predict its opponents reactions based not only on its own cards, but on the actions of those around it. With this wider scope of understanding, an agent can in learn to bluff its opponents, with the action representing not an illogical action, as bluffing is often viewed, but rather as an act of maximising returns through an effective statistical optimisation. By using a tee dee lambda learning algorithm to continuously adapt neural network agent intelligence, agents have been shown to be able to learn to bluff without outside prompting, and even to learn to call each others bluffs in free, competitive play. ",Learning to Bluff " The semiring-based constraint satisfaction problems (semiring CSPs), proposed by Bistarelli, Montanari and Rossi \cite{BMR97}, is a very general framework of soft constraints. In this paper we propose an abstraction scheme for soft constraints that uses semiring homomorphism. To find optimal solutions of the concrete problem, the idea is, first working in the abstract problem and finding its optimal solutions, then using them to solve the concrete problem. In particular, we show that a mapping preserves optimal solutions if and only if it is an order-reflecting semiring homomorphism. 
Moreover, for a semiring homomorphism $\alpha$ and a problem $P$ over $S$, if $t$ is optimal in $\alpha(P)$, then there is an optimal solution $\bar{t}$ of $P$ such that $\bar{t}$ has the same value as $t$ in $\alpha(P)$. ",Soft constraint abstraction based on semiring homomorphism " This paper proposes a neuro-rough model based on a multi-layered perceptron and rough sets. The neuro-rough model is then tested on modelling the risk of HIV from demographic data. The model is formulated using a Bayesian framework and trained using the Monte Carlo method and the Metropolis criterion. When the model was tested to estimate the risk of HIV infection given the demographic data, it was found to give an accuracy of 62%. The proposed model is able to combine the accuracy of the Bayesian MLP model and the transparency of the Bayesian rough set model. ",Bayesian Approach to Neuro-Rough Models " Water plays a pivotal role in many physical processes, and most importantly in sustaining human life, animal life and plant life. Water supply entities therefore have the responsibility to supply clean and safe water at the rate required by the consumer. It is therefore necessary to implement mechanisms and systems that can be employed to predict both short-term and long-term water demands. The growing field of computational intelligence techniques has been proposed as an efficient tool in the modelling of dynamic phenomena. The primary objective of this paper is to compare the efficiency of two computational intelligence techniques in water demand forecasting. The techniques under comparison are the Artificial Neural Networks (ANNs) and the Support Vector Machines (SVMs). In this study it was observed that the ANNs perform better than the SVMs. This performance is measured against the generalisation ability of the two. ","Artificial Neural Networks and Support Vector Machines for Water Demand Time Series Forecasting" " An ensemble-based approach for dealing with missing data, without predicting or imputing the missing values, is proposed. This technique is suitable for online operations of neural networks and, as a result, is used for online condition monitoring. The proposed technique is tested in both classification and regression problems. An ensemble of Fuzzy-ARTMAPs is used for classification whereas an ensemble of multi-layer perceptrons is used for the regression problem. Results obtained using this ensemble-based technique are compared to those obtained using a combination of auto-associative neural networks and genetic algorithms and findings show that this method can perform up to 9% better in regression problems. Another advantage of the proposed technique is that it eliminates the need for finding the best estimate of the data, and hence, saves time. ","Fuzzy Artmap and Neural Network Approach to Online Processing of Inputs with Missing Values" " Militarised conflict is one of the risks that have a significant impact on society. Militarised Interstate Dispute (MID) is defined as an outcome of interstate interactions, which result in either peace or conflict. Effective prediction of the possibility of conflict between states is an important decision support tool for policy makers. In previous research, neural networks (NNs) have been implemented to predict MIDs. Support Vector Machines (SVMs) have proven to be very good prediction techniques and are introduced for the prediction of MIDs in this study and compared to neural networks. 
The results show that SVMs predict MIDs better than NNs, while NNs give a more consistent and easier-to-interpret sensitivity analysis than SVMs. ",Artificial Intelligence for Conflict Management " The idea of symbolic controllers tries to bridge the gap between the top-down manual design of the controller architecture, as advocated in Brooks' subsumption architecture, and the bottom-up designer-free approach that is now standard within the Evolutionary Robotics community. The designer provides a set of elementary behaviors, and evolution is given the goal of assembling them to solve complex tasks. Two experiments are presented, demonstrating the efficiency and showing the recursiveness of this approach. In particular, the sensitivity with respect to the proposed elementary behaviors, and the robustness w.r.t. generalization of the resulting controllers are studied in detail. ",Evolving Symbolic Controllers " This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange ""chemicals"" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined, but emerges during evolution: only organisms for which the growth process stabilizes give a phenotype (the stable state), others are declared nonviable. The optimization of the controller is done using the NEAT algorithm, that optimizes both the topology and the weights of the Neural Networks. Though each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics. ",Robust Multi-Cellular Developmental Design " This paper uses Artificial Neural Network (ANN) models to compute the response of a structural system subject to Indian earthquakes, using the Chamoli and Uttarkashi ground motion data. The system is first trained on a single real earthquake record. The trained ANN architecture is then used to simulate earthquakes with various intensities, and it was found that the predicted responses given by the ANN model are accurate for practical purposes. When the ANN is trained by a part of the ground motion data, it can also identify the responses of the structural system well. In this way the safety of structural systems may be predicted in case of future earthquakes without waiting for the earthquake to occur. The time period and the corresponding maximum response of the building for an earthquake have been evaluated, and the ANN is again trained to predict the maximum response of the building at different time periods. The trained time period versus maximum response ANN model is also tested on real earthquake data from another place, which was not used in the training, and was found to be in good agreement. 
","Response Prediction of Structural System Subject to Earthquake Motions using Artificial Neural Network" " This paper presents a fault classification method which makes use of a Takagi-Sugeno neuro-fuzzy model and Pseudomodal energies calculated from the vibration signals of cylindrical shells. The calculation of Pseudomodal Energies, for the purposes of condition monitoring, has previously been found to be an accurate method of extracting features from vibration signals. This calculation is therefore used to extract features from vibration signals obtained from a diverse population of cylindrical shells. Some of the cylinders in the population have faults in different substructures. The pseudomodal energies calculated from the vibration signals are then used as inputs to a neuro-fuzzy model. A leave-one-out cross-validation process is used to test the performance of the model. It is found that the neuro-fuzzy model is able to classify faults with an accuracy of 91.62%, which is higher than the previously used multilayer perceptron. ","Fault Classification using Pseudomodal Energies and Neuro-fuzzy modelling" " This paper presents bushing condition monitoring frameworks that use multi-layer perceptrons (MLP), radial basis functions (RBF) and support vector machines (SVM) classifiers. The first level of the framework determines if the bushing is faulty or not while the second level determines the type of fault. The diagnostic gases in the bushings are analyzed using the dissolve gas analysis. MLP gives superior performance in terms of accuracy and training time than SVM and RBF. In addition, an on-line bushing condition monitoring approach, which is able to adapt to newly acquired data are introduced. This approach is able to accommodate new classes that are introduced by incoming data and is implemented using an incremental learning algorithm that uses MLP. The testing results improved from 67.5% to 95.8% as new data were introduced and the testing results improved from 60% to 95.3% as new conditions were introduced. On average the confidence value of the framework on its decision was 0.92. ",On-Line Condition Monitoring using Computational Intelligence " This paper overviews the basic principles and recent advances in the emerging field of Quantum Computation (QC), highlighting its potential application to Artificial Intelligence (AI). The paper provides a very brief introduction to basic QC issues like quantum registers, quantum gates and quantum algorithms and then it presents references, ideas and research guidelines on how QC can be used to deal with some basic AI problems, such as search and pattern matching, as soon as quantum computers become widely available. ",The Road to Quantum Artificial Intelligence " Cluster matching by permuting cluster labels is important in many clustering contexts such as cluster validation and cluster ensemble techniques. The classic approach is to minimize the euclidean distance between two cluster solutions which induces inappropriate stability in certain settings. Therefore, we present the truematch algorithm that introduces two improvements best explained in the crisp case. First, instead of maximizing the trace of the cluster crosstable, we propose to maximize a chi-square transformation of this crosstable. Thus, the trace will not be dominated by the cells with the largest counts but by the cells with the most non-random observations, taking into account the marginals. 
Second, we suggest a probabilistic component in order to break ties and to make the matching algorithm truly random on random data. The truematch algorithm is designed as a building block of the truecluster framework and scales in polynomial time. First simulation results confirm that the truematch algorithm gives more consistent truecluster results for unequal cluster sizes. Free R software is available. ",Truecluster matching " Semantic network research has seen a resurgence from its early history in the cognitive sciences with the inception of the Semantic Web initiative. The Semantic Web effort has brought forth an array of technologies that support the encoding, storage, and querying of the semantic network data structure at the world stage. Currently, the popular conception of the Semantic Web is that of a data modeling medium where real and conceptual entities are related in semantically meaningful ways. However, new models have emerged that explicitly encode procedural information within the semantic network substrate. With these new technologies, the Semantic Web has evolved from a data modeling medium to a computational medium. This article provides a classification of existing computational modeling efforts and the requirements of supporting technologies that will aid in the further growth of this burgeoning domain. ",Modeling Computations in a Semantic Network " This paper describes a system capable of semi-automatically filling an XML template from free texts in the clinical domain (practice guidelines). The XML template includes semantic information not explicitly encoded in the text (pairs of conditions and actions/recommendations). Therefore, there is a need to compute the exact scope of conditions over text sequences expressing the required actions. We present a system developed for this task. We show that it yields good performance when applied to the analysis of French practice guidelines. ",Automatically Restructuring Practice Guidelines using the GEM DTD " Representing and reasoning about qualitative temporal information is an essential part of many artificial intelligence tasks. Many models have been proposed in the literature for representing such temporal information. All derive from a point-based or an interval-based framework. One fundamental reasoning task that arises in applications of these frameworks is given by the following scheme: given possibly indefinite and incomplete knowledge of the binary relationships between some temporal objects, find the consistent scenarios between all these objects. All these models require transitive tables -- or, similarly, inference rules -- for solving such tasks. We have defined an alternative model, S-languages, to represent qualitative temporal information, based on only two relations: \emph{precedence} and \emph{simultaneity}. In this paper, we show how this model makes it possible to avoid transitive tables or inference rules when handling this kind of problem. ",Temporal Reasoning without Transitive Tables " This paper is a survey of a large number of informal definitions of ``intelligence'' that the authors have collected over the years. Naturally, compiling a complete list would be impossible as many definitions of intelligence are buried deep inside articles and books. Nevertheless, the 70-odd definitions presented here are, to the authors' knowledge, the largest and most well-referenced collection there is. 
",A Collection of Definitions of Intelligence " Web semantic access in specific domains calls for specialized search engines with enhanced semantic querying and indexing capacities, which pertain both to information retrieval (IR) and to information extraction (IE). A rich linguistic analysis is required either to identify the relevant semantic units to index and weight them according to linguistic specific statistical distribution, or as the basis of an information extraction process. Recent developments make Natural Language Processing (NLP) techniques reliable enough to process large collections of documents and to enrich them with semantic annotations. This paper focuses on the design and the development of a text processing platform, Ogmios, which has been developed in the ALVIS project. The Ogmios platform exploits existing NLP modules and resources, which may be tuned to specific domains and produces linguistically annotated documents. We show how the three constraints of genericity, domain semantic awareness and performance can be handled all together. ","A Robust Linguistic Platform for Efficient and Domain specific Web Content Analysis" " We consider the problem of finding an n-agent joint-policy for the optimal finite-horizon control of a decentralized Pomdp (Dec-Pomdp). This is a problem of very high complexity (NEXP-hard in n >= 2). In this paper, we propose a new mathematical programming approach for the problem. Our approach is based on two ideas: First, we represent each agent's policy in the sequence-form and not in the tree-form, thereby obtaining a very compact representation of the set of joint-policies. Second, using this compact representation, we solve this problem as an instance of combinatorial optimization for which we formulate a mixed integer linear program (MILP). The optimal solution of the MILP directly yields an optimal joint-policy for the Dec-Pomdp. Computational experience shows that formulating and solving the MILP requires significantly less time to solve benchmark Dec-Pomdp problems than existing algorithms. For example, the multi-agent tiger problem for horizon 4 is solved in 72 secs with the MILP whereas existing algorithms require several hours to solve it. ","Mixed Integer Linear Programming For Exact Finite-Horizon Planning In Decentralized Pomdps" " In this paper, we employ Probabilistic Neural Network (PNN) with image and data processing techniques to implement a general purpose automated leaf recognition algorithm. 12 leaf features are extracted and orthogonalized into 5 principal variables which consist the input vector of the PNN. The PNN is trained by 1800 leaves to classify 32 kinds of plants with an accuracy greater than 90%. Compared with other approaches, our algorithm is an accurate artificial intelligence approach which is fast in execution and easy in implementation. ","A Leaf Recognition Algorithm for Plant Classification Using Probabilistic Neural Network" " When Kurt Goedel layed the foundations of theoretical computer science in 1931, he also introduced essential concepts of the theory of Artificial Intelligence (AI). Although much of subsequent AI research has focused on heuristics, which still play a major role in many practical AI applications, in the new millennium AI theory has finally become a full-fledged formal science, with important optimality results for embodied agents living in unknown environments, obtained through a combination of theory a la Goedel and probability theory. 
Here we look back at important milestones of AI history, mention essential recent results, and speculate about what we may expect from the next 25 years, emphasizing the significance of the ongoing dramatic hardware speedups, and discussing Goedel-inspired, self-referential, self-improving universal problem solvers. ","2006: Celebrating 75 years of AI - History and Outlook: the Next 25 Years" " In this paper we extend the new family of (quantitative) Belief Conditioning Rules (BCR) recently developed in the Dezert-Smarandache Theory (DSmT) to their qualitative counterpart for belief revision. Since the revision of quantitative as well as qualitative belief assignments given the occurrence of a new event (the conditioning constraint) can be done in many possible ways, we present here only what we consider as the most appealing Qualitative Belief Conditioning Rules (QBCR), which allow one to revise the belief directly with words and linguistic labels and thus avoid the introduction of ad-hoc translations of qualitative beliefs into quantitative ones for solving the problem. ",Qualitative Belief Conditioning Rules (QBCR) " Many systems can be described in terms of networks of discrete elements and their various relationships to one another. A semantic network, or multi-relational network, is a directed labeled graph consisting of a heterogeneous set of entities connected by a heterogeneous set of relationships. Semantic networks serve as a promising general-purpose modeling substrate for complex systems. Various standardized formats and tools are now available to support practical, large-scale semantic network models. First, the Resource Description Framework (RDF) offers a standardized semantic network data model that can be further formalized by ontology modeling languages such as RDF Schema (RDFS) and the Web Ontology Language (OWL). Second, the recent introduction of highly performant triple-stores (i.e. semantic network databases) allows semantic network models on the order of $10^9$ edges to be efficiently stored and manipulated. RDF and its related technologies are currently used extensively in the domains of computer science, digital library science, and the biological sciences. This article will provide an introduction to RDF/RDFS/OWL and an examination of its suitability to model discrete element complex systems. ",Using RDF to Model the Structure and Process of Systems " This paper deals with enriched qualitative belief functions for reasoning under uncertainty and for combining information expressed in natural language through linguistic labels. In this work, two possible enrichments (quantitative and/or qualitative) of linguistic labels are considered and operators (addition, multiplication, division, etc.) for dealing with them are proposed and explained. We denote them $qe$-operators, $qe$ standing for ""qualitative-enriched"" operators. These operators can be seen as a direct extension of the classical qualitative operators ($q$-operators) proposed recently in the Dezert-Smarandache Theory of plausible and paradoxist reasoning (DSmT). $q$-operators are also justified in detail in this paper. The quantitative enrichment of a linguistic label is a numerical supporting degree in $[0,\infty)$, while the qualitative enrichment takes its values in a finite ordered set of linguistic values. Quantitative enrichment is less precise than qualitative enrichment, but it is expected to be closer to what human experts can easily provide when expressing linguistic labels with supporting degrees. 
Two simple examples are given to show how the fusion of qualitative-enriched belief assignments can be done. ",Enrichment of Qualitative Beliefs for Reasoning under Uncertainty " We try to perform geometrization of psychology by representing mental states, ""ideas"", by points of a metric space, the ""mental space"". Evolution of ideas is described by dynamical systems in metric mental space. We apply the mental space approach for modeling flows of unconscious and conscious information in the human brain. In a series of models, Models 1-4, we consider cognitive systems with increasing complexity of psychological behavior determined by the structure of flows of ideas. Since our models are in fact models of the AI-type, one immediately recognizes that they can be used for the creation of AI-systems, which we call psycho-robots, exhibiting important elements of human psyche. Creation of such psycho-robots may be a useful improvement of domestic robots. At the moment domestic robots are merely simple working devices (e.g. vacuum cleaners or lawn mowers). However, in the future one can expect demand for systems which are able not only to perform simple work tasks, but also have elements of a human self-developing psyche. Such an AI-psyche could play an important role in relations between psycho-robots and their owners as well as between psycho-robots. Since the presence of a huge number of psycho-complexes is an essential characteristic of human psychology, it would be interesting to model them in the AI-framework. ",Toward Psycho-robots " In this paper we study cellular automata (CAs) that perform the computational Majority task. This task is a good example of what the phenomenon of emergence in complex systems is. We take an interest in the reasons that make this particular fitness landscape a difficult one. The first goal is to study the landscape as such, and thus it is ideally independent of the actual heuristics used to search the space. However, a second goal is to understand the features a good search technique for this particular problem space should possess. We statistically quantify in various ways the degree of difficulty of searching this landscape. Due to neutrality, investigations based on sampling techniques on the whole landscape are difficult to conduct. So, we go exploring the landscape from the top. Although it has been proved that no CA can perform the task perfectly, several efficient CAs for this task have been found. Exploiting similarities between these CAs and symmetries in the landscape, we define the Olympus landscape, which is regarded as the ""heavenly home"" of the best local optima known (blok). Then we measure several properties of this subspace. Although it is easier to find relevant CAs in this subspace than in the overall landscape, there are structural reasons that prevent a searcher from finding overfitted CAs in the Olympus. Finally, we study the dynamics and performance of genetic algorithms on the Olympus in order to confirm our analysis and to find efficient CAs for the Majority problem with low computational cost. ","Fitness landscape of the cellular automata majority problem: View from the Olympus" " This paper introduces the concept of fitness cloud as an alternative way, compared to the geographic notion of fitness landscape, to visualize and analyze search spaces. It is argued that the fitness cloud concept overcomes several deficiencies of the landscape representation. 
Our analysis is based on the correlation between the fitness of solutions and the fitnesses of nearest solutions according to some neighborhood relation. We focus on the behavior of local search heuristics, such as the hill climber, on the well-known NK fitness landscape. In both cases the fitness vs. fitness correlation is shown to be related to the epistatic parameter K. ",Local search heuristics: Fitness Cloud versus Fitness Landscape " This theoretical work defines the measure of autocorrelation of evolvability in the context of neutral fitness landscapes. This measure has been studied on the classical MAX-SAT problem. This work highlights a new characteristic of neutral fitness landscapes which allows the design of new adapted metaheuristics. ",Measuring the Evolvability Landscape to study Neutrality " This paper describes a system capable of semi-automatically filling an XML template from free texts in the clinical domain (practice guidelines). The XML template includes semantic information not explicitly encoded in the text (pairs of conditions and actions/recommendations). Therefore, there is a need to compute the exact scope of conditions over text sequences expressing the required actions. We present in this paper the rules developed for this task. We show that the system yields good performance when applied to the analysis of French practice guidelines. ","From Texts to Structured Documents: The Case of Health Practice Guidelines" " We develop a general framework for MAP estimation in discrete and Gaussian graphical models using Lagrangian relaxation techniques. The key idea is to reformulate an intractable estimation problem as one defined on a more tractable graph, but subject to additional constraints. Relaxing these constraints gives a tractable dual problem, one defined by a thin graph, which is then optimized by an iterative procedure. When this iterative optimization leads to a consistent estimate, one which also satisfies the constraints, then it corresponds to an optimal MAP estimate of the original model. Otherwise there is a ``duality gap'', and we obtain a bound on the optimal solution. Thus, our approach combines convex optimization with dynamic programming techniques applicable for thin graphs. The popular tree-reweighted max-product (TRMP) method may be seen as solving a particular class of such relaxations, where the intractable graph is relaxed to a set of spanning trees. We also consider relaxations to a set of small induced subgraphs, thin subgraphs (e.g. loops), and a connected tree obtained by ``unwinding'' cycles. In addition, we propose a new class of multiscale relaxations that introduce ``summary'' variables. The potential benefits of such generalizations include: reducing or eliminating the ``duality gap'' in hard problems, reducing the number of Lagrange multipliers in the dual problem, and accelerating convergence of the iterative optimization procedure. ",Lagrangian Relaxation for MAP Estimation in Graphical Models " This paper addresses a method to analyze the covert social network foundation hidden behind the terrorism disaster. The method solves a node discovery problem, which means discovering a node which functions relevantly in a social network but has escaped from monitoring of the presence and mutual relationships of nodes. The method aims at integrating the expert investigator's prior understanding, insight on the nature of the terrorists' social network derived from complex graph theory, and computational data processing. 
The social network responsible for the 9/11 attack in 2001 is used to execute simulation experiment to evaluate the performance of the method. ",Analyzing covert social network foundation behind terrorism disaster " Methods to solve a node discovery problem for a social network are presented. Covert nodes refer to the nodes which are not observable directly. They transmit the influence and affect the resulting collaborative activities among the persons in a social network, but do not appear in the surveillance logs which record the participants of the collaborative activities. Discovering the covert nodes is identifying the suspicious logs where the covert nodes would appear if the covert nodes became overt. The performance of the methods is demonstrated with a test dataset generated from computationally synthesized networks and a real organization. ",Node discovery problem for a social network " An empty spot refers to an empty hard-to-fill space which can be found in the records of the social interaction, and is the clue to the persons in the underlying social network who do not appear in the records. This contribution addresses a problem to predict relevant empty spots in social interaction. Homogeneous and inhomogeneous networks are studied as a model underlying the social interaction. A heuristic predictor function approach is presented as a new method to address the problem. Simulation experiment is demonstrated over a homogeneous network. A test data in the form of baskets is generated from the simulated communication. Precision to predict the empty spots is calculated to demonstrate the performance of the presented approach. ",Predicting relevant empty spots in social interaction " To appear in Theory and Practice of Logic Programming (TPLP), 2008. We are researching the interaction between the rule and the ontology layers of the Semantic Web, by comparing two options: 1) using OWL and its rule extension SWRL to develop an integrated ontology/rule language, and 2) layering rules on top of an ontology with RuleML and OWL. Toward this end, we are developing the SWORIER system, which enables efficient automated reasoning on ontologies and rules, by translating all of them into Prolog and adding a set of general rules that properly capture the semantics of OWL. We have also enabled the user to make dynamic changes on the fly, at run time. This work addresses several of the concerns expressed in previous work, such as negation, complementary classes, disjunctive heads, and cardinality, and it discusses alternative approaches for dealing with inconsistencies in the knowledge base. In addition, for efficiency, we implemented techniques called extensionalization, avoiding reanalysis, and code minimization. ","Translating OWL and Semantic Web Rules into Prolog: Moving Toward Description Logic Programs" " We consider hexagonal cellular automata with immediate cell neighbourhood and three cell-states. Every cell calculates its next state depending on the integral representation of states in its neighbourhood, i.e. how many neighbours are in each one state. We employ evolutionary algorithms to breed local transition functions that support mobile localizations (gliders), and characterize sets of the functions selected in terms of quasi-chemical systems. 
Analysis of the set of functions evolved allows to speculate that mobile localizations are likely to emerge in the quasi-chemical systems with limited diffusion of one reagent, a small number of molecules is required for amplification of travelling localizations, and reactions leading to stationary localizations involve relatively equal amount of quasi-chemical species. Techniques developed can be applied in cascading signals in nature-inspired spatially extended computing devices, and phenomenological studies and classification of non-linear discrete systems. ",Evolving localizations in reaction-diffusion cellular automata " We describe decomposition during search (DDS), an integration of And/Or tree search into propagation-based constraint solvers. The presented search algorithm dynamically decomposes sub-problems of a constraint satisfaction problem into independent partial problems, avoiding redundant work. The paper discusses how DDS interacts with key features that make propagation-based solvers successful: constraint propagation, especially for global constraints, and dynamic search heuristics. We have implemented DDS for the Gecode constraint programming library. Two applications, solution counting in graph coloring and protein structure prediction, exemplify the benefits of DDS in practice. ",Decomposition During Search for Propagation-Based Constraint Solvers " A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines. ",Universal Intelligence: A Definition of Machine Intelligence " Although the definition and measurement of intelligence is clearly of fundamental importance to the field of artificial intelligence, no general survey of definitions and tests of machine intelligence exists. Indeed few researchers are even aware of alternatives to the Turing test and its many derivatives. In this paper we fill this gap by providing a short survey of the many tests of machine intelligence that have been proposed. ",Tests of Machine Intelligence " We consider an agent interacting with an unknown environment. The environment is a function which maps natural numbers to natural numbers; the agent's set of hypotheses about the environment contains all such functions which are computable and compatible with a finite set of known input-output pairs, and the agent assigns a positive probability to each such hypothesis. We do not require that this probability distribution be computable, but it must be bounded below by a positive computable function. The agent has a utility function on outputs from the environment. We show that if this utility function is bounded below in absolute value by an unbounded computable function, then the expected utility of any input is undefined. 
This implies that a computable utility function will have convergent expected utilities iff that function is bounded. ","Convergence of Expected Utilities with Algorithmic Probability Distributions" " Most definitions of ontology, viewed as a ""specification of a conceptualization"", agree on the fact that if an ontology can take different forms, it necessarily includes a vocabulary of terms and some specification of their meaning in relation to the domain's conceptualization. And as domain knowledge is mainly conveyed through scientific and technical texts, we can hope to extract some useful information from them for building ontology. But is it as simple as this? In this article we shall see that the lexical structure, i.e. the network of words linked by linguistic relationships, does not necessarily match the domain conceptualization. We have to bear in mind that writing documents is the concern of textual linguistics, of which one of the principles is the incompleteness of text, whereas building ontology - viewed as task-independent knowledge - is concerned with conceptualization based on formal and not natural languages. Nevertheless, the famous Sapir and Whorf hypothesis, concerning the interdependence of thought and language, is also applicable to formal languages. This means that the way an ontology is built and a concept is defined depends directly on the formal language which is used; and the results will not be the same. The introduction of the notion of ontoterminology allows to take into account epistemological principles for formal ontology building. ",Le terme et le concept : fondements d'une ontoterminologie " Stream computing is the use of multiple autonomic and parallel modules together with integrative processors at a higher level of abstraction to embody ""intelligent"" processing. The biological basis of this computing is sketched and the matter of learning is examined. ",Stream Computing " In this paper we introduce a new selection scheme in cellular genetic algorithms (cGAs). Anisotropic Selection (AS) promotes diversity and allows accurate control of the selective pressure. First we compare this new scheme with the classical rectangular grid shapes solution according to the selective pressure: we can obtain the same takeover time with the two techniques although the spreading of the best individual is different. We then give experimental results that show to what extent AS promotes the emergence of niches that support low coupling and high cohesion. Finally, using a cGA with anisotropic selection on a Quadratic Assignment Problem we show the existence of an anisotropic optimal value for which the best average performance is observed. Further work will focus on the selective pressure self-adjustment ability provided by this new selection scheme. ",Anisotropic selection in cellular genetic algorithms " This philosophical paper explores the relation between modern scientific simulations and the future of the universe. We argue that a simulation of an entire universe will result from future scientific activity. This requires us to tackle the challenge of simulating open-ended evolution at all levels in a single simulation. The simulation should encompass not only biological evolution, but also physical evolution (a level below) and cultural evolution (a level above). The simulation would allow us to probe what would happen if we would ""replay the tape of the universe"" with the same or different laws and initial conditions. 
We also distinguish between real-world and artificial-world modelling. Assuming that intelligent life could indeed simulate an entire universe, this leads to two tentative hypotheses. Some authors have argued that we may already be in a simulation run by an intelligent entity. Or, if such a simulation could be made real, this would lead to the production of a new universe. This last direction is argued with a careful speculative philosophical approach, emphasizing the imperative to find a solution to the heat death problem in cosmology. The reader is invited to consult Annex 1 for an overview of the logical structure of this paper. -- Keywords: far future, future of science, ALife, simulation, realization, cosmology, heat death, fine-tuning, physical eschatology, cosmological natural selection, cosmological artificial selection, artificial cosmogenesis, selfish biocosm hypothesis, meduso-anthropic principle, developmental singularity hypothesis, role of intelligent life. ","The Future of Scientific Simulations: from Artificial Life to Artificial Cosmogenesis" " This paper has been withdrawn. ",Serious Flaws in Korf et al.'s Analysis on Time Complexity of A* " In this paper, we describe a new algorithm that consists in combining an eye-tracker for minimizing the fatigue of a user during the evaluation process of Interactive Evolutionary Computation. The approach is then applied to the Interactive One-Max optimization problem. ","Eye-Tracking Evolutionary Algorithm to minimize user's fatigue in IEC applied to Interactive One-Max problem" " In this paper, I present a method to solve a node discovery problem in a networked organization. Covert nodes refer to the nodes which are not observable directly. They affect social interactions, but do not appear in the surveillance logs which record the participants of the social interactions. Discovering the covert nodes is defined as identifying the suspicious logs where the covert nodes would appear if the covert nodes became overt. A mathematical model is developed for the maximal likelihood estimation of the network behind the social interactions and for the identification of the suspicious logs. Precision, recall, and F measure characteristics are demonstrated with the dataset generated from a real organization and the computationally synthesized datasets. The performance is close to the theoretical limit for any covert nodes in the networks of any topologies and sizes if the ratio of the number of observation to the number of possible communication patterns is large. ",Node discovery in a networked organization " In an emergency situation, the actors need an assistance allowing them to react swiftly and efficiently. In this prospect, we present in this paper a decision support system that aims to prepare actors in a crisis situation thanks to a decision-making support. The global architecture of this system is presented in the first part. Then we focus on a part of this system which is designed to represent the information of the current situation. This part is composed of a multiagent system that is made of factual agents. Each agent carries a semantic feature and aims to represent a partial part of a situation. The agents develop thanks to their interactions by comparing their semantic features using proximity measures and according to specific ontologies. 
","Multiagent Approach for the Representation of Information in a Decision Support System" " A new method is presented, that can help a person become aware of his or her unconscious preferences, and convey them to others in the form of verbal explanation. The method combines the concepts of reflection, visualization, and verbalization. The method was tested in an experiment where the unconscious preferences of the subjects for various artworks were investigated. In the experiment, two lessons were learned. The first is that it helps the subjects become aware of their unconscious preferences to verbalize weak preferences as compared with strong preferences through discussion over preference diagrams. The second is that it is effective to introduce an adjustable factor into visualization to adapt to the differences in the subjects and to foster their mutual understanding. ",Reflective visualization and verbalization of unconscious preference " This paper describes application of rough set theory, on the analysis of hydrocyclone operation. In this manner, using Self Organizing Map (SOM) as preprocessing step, best crisp granules of data are obtained. Then, using a combining of SOM and rough set theory (RST)-called SORST-, the dominant rules on the information table, obtained from laboratory tests, are extracted. Based on these rules, an approximate estimation on decision attribute is fulfilled. Finally, a brief comparison of this method with the SOM-NFIS system (briefly SONFIS) is highlighted. ",Application of Rough Set Theory to Analysis of Hydrocyclone Operation " We are interested in the problem of multiagent systems development for risk detecting and emergency response in an uncertain and partially perceived environment. The evaluation of the current situation passes by three stages inside the multiagent system. In a first time, the situation is represented in a dynamic way. The second step, consists to characterise the situation and finally, it is compared with other similar known situations. In this paper, we present an information modelling of an observed environment, that we have applied on the RoboCupRescue Simulation System. Information coming from the environment are formatted according to a taxonomy and using semantic features. The latter are defined thanks to a fine ontology of the domain and are managed by factual agents that aim to represent dynamically the current situation. ",Agent-Based Perception of an Environment in an Emergency Situation " In this paper, we study the influence of the selective pressure on the performance of cellular genetic algorithms. Cellular genetic algorithms are genetic algorithms where the population is embedded on a toroidal grid. This structure makes the propagation of the best so far individual slow down, and allows to keep in the population potentially good solutions. We present two selective pressure reducing strategies in order to slow down even more the best solution propagation. We experiment these strategies on a hard optimization problem, the quadratic assignment problem, and we show that there is a value for of the control parameter for both which gives the best performance. This optimal value does not find explanation on only the selective pressure, measured either by take over time and diversity evolution. This study makes us conclude that we need other tools than the sole selective pressure measures to explain the performances of cellular genetic algorithms. 
","On the Influence of Selection Operators on Performances in Cellular Genetic Algorithms" " Review of: Brigitte Le Roux and Henry Rouanet, Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis, Kluwer, Dordrecht, 2004, xi+475 pp. ","Geometric Data Analysis, From Correspondence Analysis to Structured Data Analysis (book review)" " In this study, we introduce general frame of MAny Connected Intelligent Particles Systems (MACIPS). Connections and interconnections between particles get a complex behavior of such merely simple system (system in system).Contribution of natural computing, under information granulation theory, are the main topics of this spacious skeleton. Upon this clue, we organize two algorithms involved a few prominent intelligent computing and approximate reasoning methods: self organizing feature map (SOM), Neuro- Fuzzy Inference System and Rough Set Theory (RST). Over this, we show how our algorithms can be taken as a linkage of government-society interaction, where government catches various fashions of behavior: solid (absolute) or flexible. So, transition of such society, by changing of connectivity parameters (noise) from order to disorder is inferred. Add to this, one may find an indirect mapping among financial systems and eventual market fluctuations with MACIPS. Keywords: phase transition, SONFIS, SORST, many connected intelligent particles system, society-government interaction ",Phase transition in SONFIS&SORST " Affinity propagation clustering (AP) has two limitations: it is hard to know what value of parameter 'preference' can yield an optimal clustering solution, and oscillations cannot be eliminated automatically if occur. The adaptive AP method is proposed to overcome these limitations, including adaptive scanning of preferences to search space of the number of clusters for finding the optimal clustering solution, adaptive adjustment of damping factors to eliminate oscillations, and adaptive escaping from oscillations when the damping adjustment technique fails. Experimental results on simulated and real data sets show that the adaptive AP is effective and can outperform AP in quality of clustering results. ",Adaptive Affinity Propagation Clustering " Approximately more than 90% of all coal production in Iranian underground mines is derived directly longwall mining method. Out of seam dilution is one of the essential problems in these mines. Therefore the dilution can impose the additional cost of mining and milling. As a result, recognition of the effective parameters on the dilution has a remarkable role in industry. In this way, this paper has analyzed the influence of 13 parameters (attributed variables) versus the decision attribute (dilution value), so that using two approximate reasoning methods, namely Rough Set Theory (RST) and Self Organizing Neuro- Fuzzy Inference System (SONFIS) the best rules on our collected data sets has been extracted. The other benefit of later methods is to predict new unknown cases. So, the reduced sets (reducts) by RST have been obtained. Therefore the emerged results by utilizing mentioned methods shows that the high sensitive variables are thickness of layer, length of stope, rate of advance, number of miners, type of advancing. ","Assessment of effective parameters on dilution using approximate reasoning methods in longwall mining method, Iran coal mines" " This study, fundamentals of fuzzy block theory, and its application in assessment of stability in underground openings, has surveyed. 
Using fuzzy topics and inserting them into key block theory, in two ways, the fundamentals of fuzzy block theory have been presented. In indirect combining, by coupling an adaptive Neuro-Fuzzy Inference System (NFIS) with classic block theory, we could extract possibly damaged parts around a tunnel. In the direct solution, some principles of block theory were rewritten by means of different fuzzy facets theory. ",Toward Fuzzy block theory " This paper describes the application of information granulation theory to the analysis of hydrocyclone performance. In this manner, using a combination of Self Organizing Map (SOM) and Neuro-Fuzzy Inference System (NFIS), briefly called SONFIS, crisp and fuzzy granules are obtained. Balancing of crisp granules and sub-fuzzy granules, within non-fuzzy information (initial granulation), is rendered in an open-close iteration. Using two criteria, ""simplicity of rules"" and ""adaptive threshold error level"", the stability of the algorithm is guaranteed. Validation of the proposed method on the data set of the hydrocyclone is rendered. ","Analysis of hydrocyclone performance based on information granulation theory" " In everyday life it happens that a person has to reason about what other people think and how they behave, in order to achieve his goals. In other words, an individual may be required to adapt his behaviour by reasoning about the others' mental state. In this paper we focus on a knowledge representation language derived from logic programming which both supports the representation of mental states of individual communities and provides each with the capability of reasoning about others' mental states and acting accordingly. The proposed semantics is shown to be translatable into stable model semantics of logic programs with aggregates. ",Logic programming with social features " Many social Web sites allow users to publish content and annotate with descriptive metadata. In addition to flat tags, some social Web sites have recently begun to allow users to organize their content and metadata hierarchically. The social photosharing site Flickr, for example, allows users to group related photos in sets, and related sets in collections. The social bookmarking site Del.icio.us similarly lets users group related tags into bundles. Although the sites themselves don't impose any constraints on how these hierarchies are used, individuals generally use them to capture relationships between concepts, most commonly the broader/narrower relations. Collective annotation of content with hierarchical relations may lead to an emergent classification system, called a folksonomy. While some researchers have explored using tags as evidence for learning folksonomies, we believe that the hierarchical relations described above offer a high-quality source of evidence for this task. We propose a simple approach to aggregate shallow hierarchies created by many distinct Flickr users into a common folksonomy. Our approach uses statistics to determine if a particular relation should be retained or discarded. The relations are then woven together into larger hierarchies. Although we have not carried out a detailed quantitative evaluation of the approach, it looks very promising since it generates very reasonable, non-trivial hierarchies. ",Constructing Folksonomies from User-specified Relations on Flickr " We analyze the style and structure of story narrative using the case of film scripts. 
The practical importance of this is noted, especially the need for support tools for television movie writing. We use the Casablanca film script, and scripts from six episodes of CSI (Crime Scene Investigation). For analysis of style and structure, we quantify various central perspectives discussed in McKee's book, ""Story: Substance, Structure, Style, and the Principles of Screenwriting"". Film scripts offer a useful point of departure for exploration of the analysis of more general narratives. Our methodology, using Correspondence Analysis and hierarchical clustering, is innovative in a range of areas that we discuss. In particular, this work is groundbreaking in taking the qualitative analysis of McKee and grounding it in a quantitative and algorithmic framework. ",The Structure of Narrative: the Case of Film Scripts " In the last year more than 70,000 people have been brought to UK hospitals with serious injuries. Each time, a clinician has to urgently take a patient through a screening procedure to make a reliable decision on the trauma treatment. Typically, such a procedure comprises around 20 tests; however, the condition of a trauma patient remains very difficult to test properly. What happens if these tests are ambiguously interpreted and the information about the severity of the injury turns out to be misleading? A mistake in a decision can be fatal: using a mild treatment can put a patient at risk of dying from posttraumatic shock, while overtreatment can also cause death. How can we reduce the risk of death caused by unreliable decisions? It has been shown that probabilistic reasoning, based on the Bayesian methodology of averaging over decision models, allows clinicians to evaluate the uncertainty in decision making. Based on this methodology, in this paper we aim at selecting the most important screening tests while keeping a high performance. We assume that probabilistic reasoning within the Bayesian methodology allows us to discover new relationships between the screening tests and uncertainty in decisions. In practice, selection of the most informative tests can also reduce the cost of a screening procedure in trauma care centers. In our experiments we use the UK Trauma data to compare the efficiency of the proposed technique in terms of performance. We also compare the uncertainty in decisions in terms of entropy. ",Feature Selection for Bayesian Evaluation of Trauma Death Risk " We present in this article a new evaluation method for classification and segmentation of textured images in uncertain environments. In uncertain environments, real classes and boundaries are known with only a partial certainty given by the experts. Most of the time, in published papers, only classification or only segmentation is considered and evaluated. Here, we propose to take into account both the classification and segmentation results according to the certainty given by the experts. We present the results of this method on a fusion of classifiers of sonar images for seabed characterization. ",Fusion for Evaluation of Image Classification in Uncertain Environments " The investigation of a terrorist attack is a time-critical task. The investigators have a limited time window to diagnose the organizational background of the terrorists, to run down and arrest the wire-pullers, and to take action to prevent or eradicate the terrorist attack. 
An intuitive interface to visualize the intelligence data set stimulates the investigators' experience and knowledge, and aids them in decision-making for immediately effective action. This paper presents a computational method to analyze the intelligence data set on the collective actions of the perpetrators of the attack, and to visualize it in the form of a social network diagram which predicts the positions where the wire-pullers conceal themselves. ","Intuitive visualization of the intelligence for the run-down of terrorist wire-pullers" " This paper describes the application of information granulation theory to the design of rock engineering flowcharts. Firstly, an overall flowchart based on information granulation theory is highlighted. Information granulation theory, in crisp (non-fuzzy) or fuzzy format, can take into account engineering experience (especially fuzzy, incomplete or superfluous information) or engineering judgments in each step of the design procedure, while suitable modeling instruments are employed. In this manner, and as an extension of soft modeling instruments, crisp and fuzzy granules are obtained from monitored data sets using three combinations of Self Organizing Map (SOM), Neuro-Fuzzy Inference System (NFIS), and Rough Set Theory (RST). The main underlying core of our algorithms is the balancing of crisp (rough or non-fuzzy) granules and sub fuzzy granules, within non fuzzy information (initial granulation), over the open-close iterations. Using different criteria for this balancing, the best granules (information pockets) are obtained. Validations of our proposed methods on the data set of in-situ permeability in rock masses at the Shivashan dam, Iran, have been highlighted. ",Rock mechanics modeling based on soft granulation theory " The knowledge-based economy forces companies in a nation to group together as a cluster in order to maintain their competitiveness in the world market. Cluster development relies on two key success factors: knowledge sharing and collaboration between the actors in the cluster. Thus, our study proposes a knowledge management system to support knowledge management activities within the cluster. To achieve the objectives of this study, ontology plays a very important role in the knowledge management process in various ways, such as building reusable and faster knowledge bases and representing knowledge explicitly in a better way. However, creating and representing ontologies creates difficulties for organizations due to the ambiguity and unstructured nature of the sources of knowledge. Therefore, the objective of this paper is to propose a methodology to create and represent an ontology for organization development using a knowledge engineering approach. The handicraft cluster in Thailand is used as a case study to illustrate our proposed methodology. ",An Ontology-based Knowledge Management System for Industry Clusters " Crisis response poses many of the most difficult information technology challenges in crisis management. It requires information- and communication-intensive efforts, utilized for reducing uncertainty, calculating and comparing costs and benefits, and managing resources in a fashion beyond those regularly available to handle routine problems. In this paper, we explore the benefits of artificial intelligence technologies in crisis response. This paper discusses the role of artificial intelligence technologies, namely robotics, ontology and semantic web, and multi-agent systems, in crisis response. 
",The Role of Artificial Intelligence Technologies in Crisis Response " We present and discuss a mixed conjunctive and disjunctive rule, a generalization of conflict repartition rules, and a combination of these two rules. In the belief functions theory one of the major problem is the conflict repartition enlightened by the famous Zadeh's example. To date, many combination rules have been proposed in order to solve a solution to this problem. Moreover, it can be important to consider the specificity of the responses of the experts. Since few year some unification rules are proposed. We have shown in our previous works the interest of the proportional conflict redistribution rule. We propose here a mixed combination rule following the proportional conflict redistribution rule modified by a discounting procedure. This rule generalizes many combination rules. ","Toward a combination rule to deal with partial conflict and specificity in belief functions theory" " In this chapter, we present and discuss a new generalized proportional conflict redistribution rule. The Dezert-Smarandache extension of the Demster-Shafer theory has relaunched the studies on the combination rules especially for the management of the conflict. Many combination rules have been proposed in the last few years. We study here different combination rules and compare them in terms of decision on didactic example and on generated data. Indeed, in real applications, we need a reliable decision and it is the final results that matter. This chapter shows that a fine proportional conflict redistribution rule must be preferred for the combination in the belief function theory. ","A new generalization of the proportional conflict redistribution rule stable in terms of decision" " These last years, there were many studies on the problem of the conflict coming from information combination, especially in evidence theory. We can summarise the solutions for manage the conflict into three different approaches: first, we can try to suppress or reduce the conflict before the combination step, secondly, we can manage the conflict in order to give no influence of the conflict in the combination step, and then take into account the conflict in the decision step, thirdly, we can take into account the conflict in the combination step. The first approach is certainly the better, but not always feasible. It is difficult to say which approach is the best between the second and the third. However, the most important is the produced results in applications. We propose here a new combination rule that distributes the conflict proportionally on the element given this conflict. We compare these different combination rules on real data in Sonar imagery and Radar target classification. ","Une nouvelle r\`egle de combinaison r\'epartissant le conflit - Applications en imagerie Sonar et classification de cibles Radar" " When implementing a propagator for a constraint, one must decide about variants: When implementing min, should one also implement max? Should one implement linear equations both with and without coefficients? Constraint variants are ubiquitous: implementing them requires considerable (if not prohibitive) effort and decreases maintainability, but will deliver better performance. This paper shows how to use variable views, previously introduced for an implementation architecture, to derive perfect propagator variants. A model for views and derived propagators is introduced. 
Derived propagators are proved to be indeed perfect in that they inherit essential properties such as correctness and domain and bounds consistency. Techniques for systematically deriving propagators such as transformation, generalization, specialization, and channeling are developed for several variable domains. We evaluate the massive impact of derived propagators. Without derived propagators, Gecode would require 140000 rather than 40000 lines of code for propagators. ",Perfect Derived Propagators " A serious defect with the Halpern-Pearl (HP) definition of causality is repaired by combining a theory of causality with a theory of defaults. In addition, it is shown that (despite a claim to the contrary) a cause according to the HP condition need not be a single conjunct. A definition of causality motivated by Wright's NESS test is shown to always hold for a single conjunct. Moreover, conditions that hold for all the examples considered by HP are given that guarantee that causality according to (this version of) the NESS test is equivalent to the HP definition. ",Defaults and Normality in Causal Structures " This paper has been withdrawn by the author due to extremely unscientific errors. ",The model of quantum evolution " The classification of textured images amounts to considering the images in terms of areas with the same texture. In an uncertain environment, it can be better to take an imprecise decision or to reject an area corresponding to an unlearned class. Moreover, the areas that serve as classification units may contain more than one texture. These considerations allow us to develop a belief decision model permitting us to reject an area as unlearned and to decide on unions and intersections of learned classes. The proposed approach finds its full justification in an application to seabed characterization from sonar images, which serves as an illustration. ",Belief decision support and reject for textured images characterization " We study two aspects of information semantics: (i) the collection of all relationships, (ii) tracking and spotting anomaly and change. The first is implemented by endowing all relevant information spaces with a Euclidean metric in a common projected space. The second is modelled by an induced ultrametric. A very general way to achieve a Euclidean embedding of different information spaces based on cross-tabulation counts (and from other input data formats) is provided by Correspondence Analysis. From there, the induced ultrametric that we are particularly interested in takes a sequential - e.g. temporal - ordering of the data into account. We employ such a perspective to look at narrative, ""the flow of thought and the flow of language"" (Chafe). In application to policy decision making, we show how we can focus analysis in a small number of dimensions. ","The Correspondence Analysis Platform for Uncovering Deep Structure in Data and Information" " In this paper we extend the Inagaki Weighted Operators fusion rule (WO) in information fusion by redistributing not only the conflicting mass but also the masses of non-empty intersections; we call the result Double Weighted Operators (DWO). Then we propose a new fusion rule class, the Class of Proportional Redistribution of Intersection Masses (CPRIM), which generates many interesting particular fusion rules in information fusion. Both formulas are presented for any number of sources of information. An application and comparison with other fusion rules are given in the last section. 
","Extension of Inagaki General Weighted Operators and A New Fusion Rule Class of Proportional Redistribution of Intersection Masses" " In this chapter, we propose a new practical codification of the elements of the Venn diagram in order to easily manipulate the focal elements. In order to reduce the complexity, the eventual constraints must be integrated in the codification at the beginning. Hence, we only consider a reduced hyper power set $D_r^\Theta$ that can be $2^\Theta$ or $D^\Theta$. We describe all the steps of a general belief function framework. The step of decision is particularly studied, indeed, when we can decide on intersections of the singletons of the discernment space no actual decision functions are easily to use. Hence, two approaches are proposed, an extension of previous one and an approach based on the specificity of the elements on which to decide. The principal goal of this chapter is to provide practical codes of a general belief function framework for the researchers and users needing the belief function theory. ","Implementing general belief function framework with a practical codification for low complexity" " In this paper, we propose in Dezert-Smarandache Theory (DSmT) framework, a new probabilistic transformation, called DSmP, in order to build a subjective probability measure from any basic belief assignment defined on any model of the frame of discernment. Several examples are given to show how the DSmP transformation works and we compare it to main existing transformations proposed in the literature so far. We show the advantages of DSmP over classical transformations in term of Probabilistic Information Content (PIC). The direct extension of this transformation for dealing with qualitative belief assignments is also presented. ",A new probabilistic transformation of belief mass assignment " We discuss metacognitive modelling as an enhancement to cognitive modelling and computing. Metacognitive control mechanisms should enable AI systems to self-reflect, reason about their actions, and to adapt to new situations. In this respect, we propose implementation details of a knowledge taxonomy and an augmented data mining life cycle which supports a live integration of obtained models. ","On Introspection, Metacognitive Control and Augmented Data Mining Live Cycles" " Each cognitive science tries to understand a set of cognitive behaviors. The structuring of knowledge of this nature's aspect is far from what it can be expected about a science. Until now universal standard consistently describing the set of cognitive behaviors has not been found, and there are many questions about the cognitive behaviors for which only there are opinions of members of the scientific community. This article has three proposals. The first proposal is to raise to the scientific community the necessity of unified the cognitive behaviors. The second proposal is claim the application of the Newton's reasoning rules about nature of his book, Philosophiae Naturalis Principia Mathematica, to the cognitive behaviors. The third is to propose a scientific theory, currently developing, that follows the rules established by Newton to make sense of nature, and could be the theory to explain all the cognitive behaviors. ",Hacia una teoria de unificacion para los comportamientos cognitivos " In this article we review standard null-move pruning and introduce our extended version of it, which we call verified null-move pruning. 
In verified null-move pruning, whenever the shallow null-move search indicates a fail-high, instead of cutting off the search from the current node, the search is continued with reduced depth. Our experiments with verified null-move pruning show that, on average, it constructs a smaller search tree with greater tactical strength in comparison to standard null-move pruning. Moreover, unlike standard null-move pruning, which fails badly in zugzwang positions, verified null-move pruning manages to detect most zugzwangs and in such cases conducts a re-search to obtain the correct result. In addition, verified null-move pruning is very easy to implement, and any standard null-move pruning program can use verified null-move pruning by modifying only a few lines of code. ",Verified Null-Move Pruning " We extend Knuth's 16 Boolean binary logic operators to fuzzy logic and neutrosophic logic binary operators. Then we generalize them to n-ary fuzzy logic and neutrosophic logic operators using the Smarandache codification of the Venn diagram and a defined vector neutrosophic law. In such a way, new operators in neutrosophic logic/set/probability are built. ",n-ary Fuzzy Logic and Neutrosophic Logic Operators " Various local search approaches have recently been applied to machine scheduling problems under multiple objectives. Their foremost consideration is the identification of the set of Pareto optimal alternatives. An important aspect of successfully solving these problems lies in the definition of an appropriate neighbourhood structure. It remains unclear in this context how interdependencies within the fitness landscape affect the resolution of the problem. The paper presents a study of neighbourhood search operators for multiple objective flow shop scheduling. Experiments have been carried out with twelve different combinations of criteria. To derive exact conclusions, small problem instances, for which the optimal solutions are known, have been chosen. Statistical tests show that no single neighbourhood operator is able to equally identify all Pareto optimal alternatives. Significant improvements however have been obtained by hybridising the solution algorithm using a randomised variable neighbourhood search technique. ","Randomised Variable Neighbourhood Search for Multi Objective Optimisation" " The paper describes the proposition and application of a local search metaheuristic for multi-objective optimization problems. It is based on two main principles of heuristic search, intensification through variable neighborhoods, and diversification through perturbations and successive iterations in favorable regions of the search space. The concept is successfully tested on permutation flow shop scheduling problems under multiple objectives. While the obtained results are encouraging in terms of their quality, another positive attribute of the approach is its simplicity, as it requires the setting of only very few parameters. 
The implementation of the Pareto Iterated Local Search metaheuristic is based on the MOOPPS computer system of local search heuristics for multi-objective scheduling, which was awarded the European Academic Software Award 2002 in Ronneby, Sweden (http://www.easa-award.net/, http://www.bth.se/llab/easa_2002.nsf). ",Foundations of the Pareto Iterated Local Search Metaheuristic " The article describes an investigation of the effectiveness of genetic algorithms for multi-objective combinatorial optimization (MOCO) by presenting an application for the vehicle routing problem with soft time windows. The work is motivated by the question of whether and how the problem structure influences the effectiveness of different configurations of the genetic algorithm. Computational results are presented for different classes of vehicle routing problems, varying in their coverage with time windows, time window size, distribution, and number of customers. The results are compared with a simple, but effective local search approach for multi-objective combinatorial optimization problems. ","A Computational Study of Genetic Crossover Operators for Multi-Objective Vehicle Routing Problem with Soft Time Windows" " The talk describes a general approach of a genetic algorithm for multiple objective optimization problems. A particular dominance relation between the individuals of the population is used to define a fitness operator, enabling the genetic algorithm to address even problems with efficient, but convex-dominated alternatives. The algorithm is implemented in a multilingual computer program, solving vehicle routing problems with time windows under multiple objectives. The graphical user interface of the program shows the progress of the genetic algorithm, and the main parameters of the approach can be easily modified. In addition, the program provides powerful decision support to the decision maker. The software proved its excellence at the finals of the European Academic Software Award EASA, held at Keble College, University of Oxford, Great Britain. ",Genetic Algorithms for multiple objective vehicle routing " The article presents a framework for the resolution of rich vehicle routing problems which are difficult to address with standard optimization techniques. We use local search on the basis of variable neighborhood search for the construction of the solutions, but embed the techniques in a flexible framework that allows the consideration of complex side constraints of the problem such as time windows, multiple depots, heterogeneous fleets, and, in particular, multiple optimization criteria. In order to identify a compromise alternative that meets the requirements of the decision maker, an interactive procedure is integrated in the resolution of the problem, allowing the modification of the preference information articulated by the decision maker. The framework is prototypically implemented in a computer system. First results of test runs on multiple depot vehicle routing problems with time windows are reported. ","A framework for the interactive resolution of multi-objective vehicle routing problems" " The integration of fuzzy set theory and fuzzy logic into scheduling is a rather new development of growing importance for manufacturing applications, with various aspects still unsolved. In the current paper, we investigate an improved local search technique for fuzzy scheduling problems with fitness plateaus, using a multi-criteria formulation of the problem. 
We especially address the problem of changing job priorities over time as studied at Sherwood Press Ltd, a Nottingham-based printing company which is a collaborator on the project. ",Improving Local Search for Fuzzy Scheduling Problems " The article proposes a heuristic approximation approach to the bin packing problem under multiple objectives. In addition to the traditional objective of minimizing the number of bins, the heterogeneousness of the elements in each bin is minimized, leading to a biobjective formulation of the problem with a tradeoff between the number of bins and their heterogeneousness. An extension of the Best-Fit approximation algorithm is presented to solve the problem. Experimental investigations have been carried out on benchmark instances of different sizes, ranging from 100 to 1000 items. Encouraging results have been obtained, showing the applicability of the heuristic approach to the described problem. ","Bin Packing Under Multiple Objectives - a Heuristic Approximation Approach" " The article presents a local search approach for the solution of timetabling problems in general, with a particular implementation for competition track 3 of the International Timetabling Competition 2007 (ITC 2007). The heuristic search procedure is based on Threshold Accepting to overcome local optima. A stochastic neighborhood is proposed and implemented, randomly removing and reassigning events from the current solution. The overall concept has been incrementally obtained from a series of experiments, which we describe in each (sub)section of the paper. As a result, we successfully derived a potential candidate solution approach for the finals of track 3 of the ITC 2007. ","An application of the Threshold Accepting metaheuristic for curriculum based course timetabling" " The paper presents a study of local search heuristics in general, and variable neighborhood search in particular, for the resolution of an assignment problem studied in the practical work of universities. Here, students have to be assigned to scientific topics which are proposed and supported by members of staff. The problem involves optimization under given preferences of students, which may be expressed when applying for certain topics. It can be observed that variable neighborhood search leads to superior results for the tested problem instances. One instance is taken from an actual case, while others have been generated based on the real-world data to support a deeper analysis. An extension of the problem has been formulated by integrating a second objective function that balances the workload of the members of staff while simultaneously maximizing the utility of the students. The algorithmic approach has been prototypically implemented in a computer system. One important aspect in this context is the application of the research work to problems of other scientific institutions, and therefore the provision of decision support functionalities. ","Variable Neighborhood Search for the University Lecturer-Student Assignment Problem" " We introduce an extended tableau calculus for answer set programming (ASP). The proof system is based on the ASP tableaux defined in [Gebser&Schaub, ICLP 2006], with an added extension rule. We investigate the power of Extended ASP Tableaux both theoretically and empirically. 
We study the relationship of Extended ASP Tableaux with the Extended Resolution proof system defined by Tseitin for sets of clauses, and separate Extended ASP Tableaux from ASP Tableaux by giving a polynomial-length proof for a family of normal logic programs P_n for which ASP Tableaux has exponential-length minimal proofs with respect to n. Additionally, Extended ASP Tableaux provide interesting insight into the effect of program simplification on the lengths of proofs in ASP. Closely related to Extended ASP Tableaux, we empirically investigate the effect of redundant rules on the efficiency of ASP solving. To appear in Theory and Practice of Logic Programming (TPLP). ",Extended ASP tableaux and rule redundancy in normal logic programs " In this paper, a Gaifman-Shapiro-style module architecture is tailored to the case of Smodels programs under the stable model semantics. The composition of Smodels program modules is suitably limited by module conditions which ensure the compatibility of the module system with stable models. Hence the semantics of an entire Smodels program depends directly on stable models assigned to its modules. This result is formalized as a module theorem which truly strengthens Lifschitz and Turner's splitting-set theorem for the class of Smodels programs. To streamline generalizations in the future, the module theorem is first proved for normal programs and then extended to cover Smodels programs using a translation from the latter class of programs to the former class. Moreover, the respective notion of module-level equivalence, namely modular equivalence, is shown to be a proper congruence relation: it is preserved under substitutions of modules that are modularly equivalent. Principles for program decomposition are also addressed. The strongly connected components of the respective dependency graph can be exploited in order to extract a module structure when there is no explicit a priori knowledge about the modules of a program. The paper includes a practical demonstration of tools that have been developed for automated (de)composition of Smodels programs. To appear in Theory and Practice of Logic Programming. ","Achieving compositionality of the stable model semantics for Smodels programs" " Most research related to unithood was conducted as part of a larger effort for the determination of termhood. Consequently, novelties are rare in this small sub-field of term extraction. In addition, existing work was mostly empirically motivated and derived. We propose a new probabilistically-derived measure, independent of any influence of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from the Google search engine for the measurement of unithood. Our comparative study using 1,825 test cases against an existing empirically-derived function revealed an improvement in terms of precision, recall and accuracy. ","Determining the Unithood of Word Sequences using a Probabilistic Approach" " Most works related to unithood were conducted as part of a larger effort for the determination of termhood. Consequently, the number of independent studies that examine the notion of unithood and produce dedicated techniques for measuring it is extremely small. We propose a new approach, independent of any influence of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from the Google search engine for the measurement of unithood. 
Our evaluations revealed a precision and recall of 98.68% and 91.82% respectively, with an accuracy of 95.42%, in measuring the unithood of 1005 test cases. ","Determining the Unithood of Word Sequences using Mutual Information and Independence Measure" " An increasing number of approaches for ontology engineering from text are gearing towards the use of online sources such as company intranets and the World Wide Web. Despite this rise, not much work can be found on preprocessing and cleaning dirty texts from online sources. This paper presents an enhancement of an Integrated Scoring for Spelling error correction, Abbreviation expansion and Case restoration (ISSAC). ISSAC is implemented as part of a text preprocessing phase in an ontology engineering system. New evaluations performed on the enhanced ISSAC using 700 chat records reveal an improved accuracy of 98% as compared to 96.5% and 71% based on the use of only basic ISSAC and of Aspell, respectively. ",Enhanced Integrated Scoring for Cleaning Dirty Texts " We present a domain-independent algorithm that computes macros in a novel way. Our algorithm computes macros ""on-the-fly"" for a given set of states and does not require previously learned or inferred information, nor prior domain knowledge. The algorithm is used to define new domain-independent tractable classes of classical planning that are proved to include \emph{Blocksworld-arm} and \emph{Towers of Hanoi}. ",On-the-fly Macros " In this study, we reproduce two new hybrid intelligent systems, called SONFIS and SORST, involving three prominent intelligent computing and approximate reasoning methods: Self Organizing feature Map (SOM), Neuro-Fuzzy Inference System and Rough Set Theory (RST). We show how our algorithms can be construed as a linkage of government-society interactions, where the government adopts various states of behavior: solid (absolute) or flexible. Thus, the transition of a society from order to disorder, driven by changing the connectivity parameters (noise), is inferred. ",Modeling of Social Transitions Using Intelligent Systems " The paper presents an investigation and implementation of the relationship between the diversity of multiple classifiers and their classification accuracy. The study is critical for building classifiers that are strong and can generalize better. The parameters of the neural networks within the committee were varied to induce diversity; hence structural diversity is the focus of this study. The hidden nodes and the activation function are the parameters that were varied. Diversity measures adopted from ecology, such as Shannon and Simpson, were used to quantify diversity. A genetic algorithm is used to find the optimal ensemble, using the accuracy as the cost function. The results observed show that there is a relationship between structural diversity and accuracy. It is observed that the classification accuracy of an ensemble increases as the diversity increases. There was an increase of 3%-6% in the classification accuracy. ","Relationship between Diversity and Perfomance of Multiple Classifiers for Decision Support" " The paper presents an exponential pheromone deposition rule to modify the basic ant system algorithm, which employs a constant deposition rule. A stability analysis using differential equations is carried out to find the values of parameters that make the ant system dynamics stable for both kinds of deposition rule. 
A roadmap of connected cities is chosen as the problem environment, where the shortest route between two given cities is required to be discovered. Simulations performed with both forms of the deposition approach using the Elitist Ant System model reveal that the exponential deposition approach outperforms the classical one to a large extent. Exhaustive experiments are also carried out to find the optimum settings of the different controlling parameters for the exponential deposition approach, and an empirical relationship between the major controlling parameters of the algorithm and some features of the problem environment. ","Balancing Exploration and Exploitation by an Elitist Ant System with Exponential Pheromone Deposition Rule" " This article presents a unique design for a parser using the Ant Colony Optimization algorithm. The paper implements the intuitive thought process of the human mind through the activities of artificial ants. The scheme presented here uses a bottom-up approach, and the parsing program can directly use ambiguous or redundant grammars. We allocate a node corresponding to each production rule present in the given grammar. Each node is connected to all other nodes (representing other production rules), thereby establishing a completely connected graph susceptible to the movement of artificial ants. Each ant tries to modify the current sentential form by the production rule present in its node and updates its position until the sentential form reduces to the start symbol S. Successful ants deposit pheromone on the links that they have traversed. Eventually, the optimum path is identified by the links carrying the maximum pheromone concentration. The design is simple, versatile, robust and effective, and obviates the calculation of the above-mentioned sets and precedence relation tables. Further advantages of our scheme lie in i) ascertaining whether a given string belongs to the language represented by the grammar, and ii) finding out the shortest possible path from the given string to the start symbol S in case multiple routes exist. ",A Novel Parser Design Algorithm Based on Artificial Ants " The paper presents an exponential pheromone deposition approach to improve the performance of the classical Ant System algorithm, which employs a uniform deposition rule. A simplified analysis using differential equations is carried out to study the stability of the basic ant system dynamics with both exponential and constant deposition rules. A roadmap of connected cities, where the shortest path between two specified cities is to be found, is taken as a platform to compare the Max-Min Ant System model (an improved and popular model of the Ant System algorithm) with exponential and constant deposition rules. Extensive simulations are performed to find the best parameter settings for the non-uniform deposition approach, and experiments with these parameter settings revealed that the above approach outstripped the traditional one to a large extent in terms of both solution quality and convergence time. 
","Extension of Max-Min Ant System with Exponential Pheromone Deposition Rule" " We address here two major challenges presented by dynamic data mining: 1) the stability challenge: we have implemented a rigorous incremental density-based clustering algorithm, independent from any initial conditions and ordering of the data-vectors stream, 2) the cognitive challenge: we have implemented a stringent selection process of association rules between clusters at time t-1 and time t for directly generating the main conclusions about the dynamics of a data-stream. We illustrate these points with an application to a two years and 2600 documents scientific information database. ","Document stream clustering: experimenting an incremental algorithm and AR-based tools for highlighting dynamic trends" " Data-stream clustering is an ever-expanding subdomain of knowledge extraction. Most of the past and present research effort aims at efficient scaling up for the huge data repositories. Our approach focuses on qualitative improvement, mainly for ""weak signals"" detection and precise tracking of topical evolutions in the framework of information watch - though scalability is intrinsically guaranteed in a possibly distributed implementation. Our GERMEN algorithm exhaustively picks up the whole set of density peaks of the data at time t, by identifying the local perturbations induced by the current document vector, such as changing cluster borders, or new/vanishing clusters. Optimality yields from the uniqueness 1) of the density landscape for any value of our zoom parameter, 2) of the cluster allocation operated by our border propagation rule. This results in a rigorous independence from the data presentation ranking or any initialization parameter. We present here as a first step the only assessment of a static view resulting from one year of the CNRS/INIST Pascal database in the field of geotechnics. ","Classification dynamique d'un flux documentaire : une \'evaluation statique pr\'ealable de l'algorithme GERMEN" " This paper gives an introduction to this issue, and presents the framework and the main steps of the Rosa project. Four teams of researchers, agronomists, computer scientists, psychologists and linguists were involved during five years within this project that aimed at the development of a knowledge based system. The purpose of the Rosa system is the modelling and the comparison of farm spatial organizations. It relies on a formalization of agronomical knowledge and thus induces a joint knowledge building process involving both the agronomists and the computer scientists. The paper describes the steps of the modelling process as well as the filming procedures set up by the psychologists and linguists in order to make explicit and to analyze the underlying knowledge building process. ","\'Etude longitudinale d'une proc\'edure de mod\'elisation de connaissances en mati\`ere de gestion du territoire agricole" " Collaborative tagging systems, such as Delicious, CiteULike, and others, allow users to annotate resources, e.g., Web pages or scientific papers, with descriptive labels called tags. The social annotations contributed by thousands of users, can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. Traditional text inference methods do not make best use of social annotation, since they do not take into account variations in individual users' perspectives and vocabulary. 
In a previous work, we introduced a simple probabilistic model that takes the interests of individual annotators into account in order to find hidden topics of annotated resources. Unfortunately, that approach had one major shortcoming: the number of topics and interests must be specified a priori. To address this drawback, we extend the model to a fully Bayesian framework, which offers a way to automatically estimate these numbers. In particular, the model allows the number of interests and topics to change as suggested by the structure of the data. We evaluate the proposed model in detail on synthetic and real-world data by comparing its performance to Latent Dirichlet Allocation on the topic extraction task. For the latter evaluation, we apply the model to infer topics of Web resources from social annotations obtained from Delicious in order to discover new resources similar to a specified one. Our empirical results demonstrate that the proposed model is a promising method for exploiting social knowledge contained in user-generated annotations. ",Modeling Social Annotation: a Bayesian Approach " Airport gate assignment is of great importance in airport operations. In this paper, we study the Airport Gate Assignment Problem (AGAP), propose a new model and implement the model with the Optimization Programming Language (OPL). With the objective of minimizing the number of conflicts between any two adjacent aircraft assigned to the same gate, we build a mathematical model with logical and binary constraints, which can provide an efficient evaluation criterion for airlines to assess their current gate assignment. To illustrate the feasibility of the model, we construct experiments with data obtained from Continental Airlines at Houston George Bush Intercontinental Airport (IAH), which indicate that our model is both efficient and effective. Moreover, we interpret the experimental results, which further demonstrate that our proposed model can provide a powerful tool for airline companies to estimate the efficiency of their current gate assignments. ",Airport Gate Assignment: New Model and Implementation " This paper investigates the use of different Artificial Intelligence methods to predict the values of several continuous variables from a Steam Generator. The objective was to determine how the different artificial intelligence methods performed in making predictions on the given dataset. The artificial intelligence methods evaluated were Neural Networks, Support Vector Machines, and Adaptive Neuro-Fuzzy Inference Systems. The types of neural networks investigated were Multi-Layer Perceptrons and Radial Basis Functions. Bayesian and committee techniques were applied to these neural networks. Each of the AI methods considered was simulated in Matlab. The results of the simulations showed that all the AI methods were capable of predicting the Steam Generator data reasonably accurately. However, the Adaptive Neuro-Fuzzy Inference System outperformed the other methods in terms of accuracy and ease of implementation, while still achieving a fast execution time as well as a reasonable training time. ",Artificial Intelligence Techniques for Steam Generator Modelling " Theoretical analysis of machine intelligence (MI) is useful for defining a common platform in both theoretical and applied artificial intelligence (AI). The goal of this paper is to set canonical definitions that can assist pragmatic research in both strong and weak AI. 
The described epistemological features of machine intelligence include the relationship between intelligent behavior, intelligent and unintelligent machine characteristics, observable and unobservable entities, and the classification of intelligence. The paper also establishes algebraic definitions of the efficiency and accuracy of MI tests as their quality measure. The last part of the paper addresses the learning process with respect to traditional epistemology and the epistemology of MI described here. The proposed views on MI correlate positively with the Hegelian monistic epistemology and contribute towards amalgamating idealistic deliberations with AI theory, particularly in a local frame of reference. ",Elementary epistemological features of machine intelligence " Answer set programming (ASP) is a logic programming paradigm that can be used to solve complex combinatorial search problems. Aggregates are an ASP construct that plays an important role in many applications. Defining a satisfactory semantics of aggregates turned out to be a difficult problem, and in this paper we propose a new approach, based on an analogy between aggregates and propositional connectives. First, we extend the definition of an answer set/stable model to cover arbitrary propositional theories; then we define aggregates on top of them both as primitive constructs and as abbreviations for formulas. Our definition of an aggregate combines expressiveness and simplicity, and it inherits many theorems about programs with nested expressions, such as theorems about strong equivalence and splitting. ",Logic programs with propositional connectives and aggregates " The standard classification of emotions involves categorizing the expression of emotions. In this paper, parameters underlying some emotions are identified and a new classification based on these parameters is suggested. ","Identification of parameters underlying emotions and a classification of emotions" " Neural networks are powerful tools for classification and regression in static environments. This paper describes a technique for creating an ensemble of neural networks that adapts dynamically to changing conditions. The model separates the input space into four regions and each network is given a weight in each region based on its performance on samples from that region. The ensemble adapts dynamically by constantly adjusting these weights based on the current performance of the networks. The data set used is a collection of financial indicators with the goal of predicting the future platinum price. An ensemble with no weightings does not improve on the naive estimate of no weekly change; our weighting algorithm gives an average percentage error of 63% for twenty weeks of prediction. ","Prediction of Platinum Prices Using Dynamically Weighted Mixture of Experts" " Health Practice Guidelines are supposed to unify practices and propose recommendations to physicians. This paper describes GemFrame, a system capable of semi-automatically filling an XML template from free texts in the clinical domain. The XML template includes semantic information not explicitly encoded in the text (pairs of conditions and actions/recommendations). Therefore, there is a need to compute the exact scope of conditions over text sequences expressing the required actions. We present a system developed for this task. We show that it yields good performance when applied to the analysis of French practice guidelines. We conclude with a precise evaluation of the tool. 
","Analyse et structuration automatique des guides de bonnes pratiques cliniques : essai d'\'evaluation" " The need for domain ontologies in mission critical applications such as risk management and hazard identification is becoming more and more pressing. Most research on ontology learning conducted in the academia remains unrealistic for real-world applications. One of the main problems is the dependence on non-incremental, rare knowledge and textual resources, and manually-crafted patterns and rules. This paper reports work in progress aiming to address such undesirable dependencies during ontology construction. Initial experiments using a working prototype of the system revealed promising potentials in automatically constructing high-quality domain ontologies using real-world texts. ","Automatic Construction of Lightweight Domain Ontologies for Chemical Engineering Risk Management" " We introduce novel results for approximate inference on planar graphical models using the loop calculus framework. The loop calculus (Chertkov and Chernyak, 2006) allows to express the exact partition function of a graphical model as a finite sum of terms that can be evaluated once the belief propagation (BP) solution is known. In general, full summation over all correction terms is intractable. We develop an algorithm for the approach presented in (Certkov et al., 2008) which represents an efficient truncation scheme on planar graphs and a new representation of the series in terms of Pfaffians of matrices. We analyze the performance of the algorithm for the partition function approximation for models with binary variables and pairwise interactions on grids and other planar graphs. We study in detail both the loop series and the equivalent Pfaffian series and show that the first term of the Pfaffian series for the general, intractable planar model, can provide very accurate approximations. The algorithm outperforms previous truncation schemes of the loop series and is competitive with other state-of-the-art methods for approximate inference. ","Approximate inference on planar graphs using Loop Calculus and Belief Propagation" " In this paper we present the N-norms/N-conorms in neutrosophic logic and set as extensions of T-norms/T-conorms in fuzzy logic and set. Also, as an extension of the Intuitionistic Fuzzy Topology we present the Neutrosophic Topologies. ","N-norm and N-conorm in Neutrosophic Logic and Set, and the Neutrosophic Topologies" " When a considerable number of mutations have no effects on fitness values, the fitness landscape is said neutral. In order to study the interplay between neutrality, which exists in many real-world applications, and performances of metaheuristics, it is useful to design landscapes which make it possible to tune precisely neutral degree distribution. Even though many neutral landscape models have already been designed, none of them are general enough to create landscapes with specific neutral degree distributions. We propose three steps to design such landscapes: first using an algorithm we construct a landscape whose distribution roughly fits the target one, then we use a simulated annealing heuristic to bring closer the two distributions and finally we affect fitness values to each neutral network. Then using this new family of fitness landscapes we are able to highlight the interplay between deceptiveness and neutrality. 
",Deceptiveness and Neutrality - the ND family of fitness landscapes " The pharmacovigilance databases consist of several case reports involving drugs and adverse events (AEs). Some methods are applied consistently to highlight all signals, i.e. all statistically significant associations between a drug and an AE. These methods are appropriate for verification of more complex relationships involving one or several drug(s) and AE(s) (e.g; syndromes or interactions) but do not address the identification of them. We propose a method for the extraction of these relationships based on Formal Concept Analysis (FCA) associated with disproportionality measures. This method identifies all sets of drugs and AEs which are potential signals, syndromes or interactions. Compared to a previous experience of disproportionality analysis without FCA, the addition of FCA was more efficient for identifying false positives related to concomitant drugs. ",Mining for adverse drug events with formal concept analysis " Domain experts should provide relevant domain knowledge to an Intelligent Tutoring System (ITS) so that it can guide a learner during problemsolving learning activities. However, for many ill-defined domains, the domain knowledge is hard to define explicitly. In previous works, we showed how sequential pattern mining can be used to extract a partial problem space from logged user interactions, and how it can support tutoring services during problem-solving exercises. This article describes an extension of this approach to extract a problem space that is richer and more adapted for supporting tutoring services. We combined sequential pattern mining with (1) dimensional pattern mining (2) time intervals, (3) the automatic clustering of valued actions and (4) closed sequences mining. Some tutoring services have been implemented and an experiment has been conducted in a tutoring system. ","A Knowledge Discovery Framework for Learning Task Models from User Interactions in Intelligent Tutoring Systems" " In this paper we propose the CTS (Concious Tutoring System) technology, a biologically plausible cognitive agent based on human brain functions.This agent is capable of learning and remembering events and any related information such as corresponding procedures, stimuli and their emotional valences. Our proposed episodic memory and episodic learning mechanism are closer to the current multiple-trace theory in neuroscience, because they are inspired by it [5] contrary to other mechanisms that are incorporated in cognitive agents. This is because in our model emotions play a role in the encoding and remembering of events. This allows the agent to improve its behavior by remembering previously selected behaviors which are influenced by its emotional mechanism. Moreover, the architecture incorporates a realistic memory consolidation process based on a data mining algorithm. ",How Emotional Mechanism Helps Episodic Learning in a Cognitive Agent " Consumers of mass media must have a comprehensive, balanced and plural selection of news to get an unbiased perspective; but achieving this goal can be very challenging, laborious and time consuming. News stories development over time, its (in)consistency, and different level of coverage across the media outlets are challenges that a conscientious reader has to overcome in order to alleviate bias. In this paper we present an intelligent agent framework currently facilitating analysis of the main sources of on-line news in El Salvador. 
We show how prior tools of text analysis and Web 2.0 technologies can be combined with minimal manual intervention to help individuals in their rational decision process, while holding media outlets accountable for their work. ",Alleviating Media Bias Through Intelligent Agent Blogging " We study the logic of comparative concept similarity $\CSL$ introduced by Sheremet, Tishkovsky, Wolter and Zakharyaschev to capture a form of qualitative similarity comparison. In this logic we can formulate assertions of the form ""objects A are more similar to B than to C"". The semantics of this logic is defined by structures equipped with distance functions evaluating the similarity degree of objects. We consider here the particular case of the semantics induced by \emph{minspaces}, the latter being distance spaces where the minimum of a set of distances always exists. It turns out that the semantics over arbitrary minspaces can be equivalently specified in terms of preferential structures, typical of conditional logics. We first give a direct axiomatisation of this logic over Minspaces. We next define a decision procedure in the form of a tableaux calculus. Both the calculus and the axiomatisation take advantage of the reformulation of the semantics in terms of preferential structures. ","Comparative concept similarity over Minspaces: Axiomatisation and Tableaux Calculus" " Data mining algorithms are now able to deal efficiently with huge amounts of data. Various kinds of patterns may be discovered and may have a great impact on the general development of knowledge. In many domains, end users may want to have their data mined by data mining tools in order to extract patterns that could impact their business. Nevertheless, those users are often overwhelmed by the large quantity of patterns extracted in such a situation. Moreover, privacy or commercial issues may prevent the users from mining the data by themselves. Thus, the users may not have the possibility to perform many experiments integrating various constraints in order to focus on the specific patterns they would like to extract. Post-processing of patterns may be an answer to that drawback. Thus, in this paper we present a framework that allows end users to manage collections of patterns. We propose to use an efficient data structure on which algebraic operators may be used in order to retrieve or access patterns in pattern bases. ",A Model for Managing Collections of Patterns " Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks. ",Feature Hashing for Large Scale Multitask Learning " We propose a new extended format to represent constraint networks using XML. This format allows us to represent constraints defined either in extension or in intension. It also allows us to reference global constraints. Any instance of the problems CSP (Constraint Satisfaction Problem), QCSP (Quantified CSP) and WCSP (Weighted CSP) can be represented using this format. ",XML Representation of Constraint Networks: Format XCSP 2.1 " The present work consisted in developing a board game. There are the traditional ones (Monopoly, Cluedo, etc.), 
but those which interest us leave less room to chance (luck) than to strategy, as in the game of chess. Kalah is an old African game; its rules are simple but the strategies to be used are very complex to implement. Of course, they rest on a strong mathematical basis, as in the film ""Rain-Man"", where one can see that gambling can be played with strategies based on mathematical theories. Artificial Intelligence gives a machine the possibility ""of thinking"" and, therefore, allows it to make decisions. In our work, we use it to give the computer the means to choose its best move. ",The Semantics of Kalah Game " 1-Nearest Neighbor with the Dynamic Time Warping (DTW) distance is one of the most effective classifiers in the time series domain. Since the global constraint was introduced in the speech community, many global constraint models have been proposed, including the Sakoe-Chiba (S-C) band, the Itakura Parallelogram, and the Ratanamahatana-Keogh (R-K) band. The R-K band is a general global constraint model that can represent any global constraints with arbitrary shape and size effectively. However, we need a good learning algorithm to discover the most suitable set of R-K bands, and the current R-K band learning algorithm still suffers from an 'overfitting' phenomenon. In this paper, we propose two new learning algorithms, i.e., a band boundary extraction algorithm and an iterative learning algorithm. The band boundary extraction is calculated from the bound of all possible warping paths in each class, and the iterative learning is adjusted from the original R-K band learning. We also use a Silhouette index, a well-known clustering validation technique, as a heuristic function, and the lower bound function, LB_Keogh, to enhance the prediction speed. Twenty datasets, from the Workshop and Challenge on Time Series Classification, held in conjunction with SIGKDD 2007, are used to evaluate our approach. ",Learning DTW Global Constraint for Time Series Classification " We propose Range and Roots, two common patterns useful for specifying a wide range of counting and occurrence constraints. We design specialised propagation algorithms for these two patterns. Counting and occurrence constraints specified using these patterns thus directly inherit a propagation algorithm. To illustrate the capabilities of the Range and Roots constraints, we specify a number of global constraints taken from the literature. Preliminary experiments demonstrate that propagating counting and occurrence constraints using these two patterns leads to a small loss in performance when compared to specialised global constraints and is competitive with alternative decompositions using elementary constraints. ","Range and Roots: Two Common Patterns for Specifying and Propagating Counting and Occurrence Constraints" " The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information has always been, and still remains today, of primal importance for the development of reliable modern information systems involving artificial reasoning. In this introduction, we present a survey of our recent theory of plausible and paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT), developed for dealing with imprecise, uncertain and conflicting sources of information. We focus our presentation on the foundations of DSmT and on its most important rules of combination, rather than on browsing specific applications of DSmT available in the literature.
Several simple examples are given throughout this presentation to show the efficiency and the generality of this new approach. ",An introduction to DSmT " When mathematicians present proofs they usually adapt their explanations to their didactic goals and to the (assumed) knowledge of their addressees. Modern automated theorem provers, in contrast, present proofs usually at a fixed level of detail (also called granularity). Often these presentations are neither intended nor suitable for human use. A challenge therefore is to develop user- and goal-adaptive proof presentation techniques that obey common mathematical practice. We present a flexible and adaptive approach to proof presentation that exploits machine learning techniques to extract a model of the specific granularity of proof examples and employs this model for the automated generation of further proofs at an adapted level of granularity. ",Granularity-Adaptive Proof Presentation " Symmetry is an important factor in solving many constraint satisfaction problems. One common type of symmetry is when we have symmetric values. In a recent series of papers, we have studied methods to break value symmetries. Our results identify computational limits on eliminating value symmetry. For instance, we prove that pruning all symmetric values is NP-hard in general. Nevertheless, experiments show that much value symmetry can be broken in practice. These results may be useful to researchers in planning, scheduling and other areas as value symmetry occurs in many different domains. ",Breaking Value Symmetry " An attractive mechanism to specify global constraints in rostering and other domains is via formal languages. For instance, the Regular and Grammar constraints specify constraints in terms of the languages accepted by an automaton and a context-free grammar respectively. Taking advantage of the fixed length of the constraint, we give an algorithm to transform a context-free grammar into an automaton. We then study the use of minimization techniques to reduce the size of such automata and speed up propagation. We show that minimizing such automata after they have been unfolded and domains initially reduced can give automata that are more compact than minimizing before unfolding and reducing. Experimental results show that such transformations can improve the size of rostering problems that we can 'model and run'. ",Reformulating Global Grammar Constraints " We propose a new family of constraints which combine together lexicographical ordering constraints for symmetry breaking with other common global constraints. We give a general purpose propagator for this family of constraints, and show how to improve its complexity by exploiting properties of the included global constraints. ",Combining Symmetry Breaking and Global Constraints " We present an online method for estimating the cost of solving SAT problems. Modern SAT solvers present several challenges to estimate search cost including non-chronological backtracking, learning and restarts. Our method uses a linear model trained on data gathered at the start of search. We show the effectiveness of this method using random and structured problems. We demonstrate that predictions made in early restarts can be used to improve later predictions. We also show that we can use such cost estimations to select a solver from a portfolio. ",Online Estimation of SAT Solving Runtime " In this work, we deal with the question of modeling programming exercises for novices pointing to an e-learning scenario. 
Our purpose is to identify basic requirements, raise some key questions and propose potential answers from a conceptual perspective. Presented as a general picture, we hypothetically situate our work in a general context where e-learning instructional material needs to be adapted to form part of an introductory Computer Science (CS) e-learning course at the CS1 level. What we have in mind is a potential course which aims at improving novices' skills and knowledge of the essentials of programming by using e-learning-based approaches in connection (at least conceptually) with a general host framework like Activemath (www.activemath.org). Our elaboration covers contextual and, particularly, cognitive elements preparing the terrain for eventual research stages in a derived project, as indicated. We concentrate our main efforts on reasoning mechanisms about exercise complexity that can eventually offer tool support for the task of exercise authoring. We base our requirements analysis on our own perception of the exercise subsystem provided by Activemath, especially within the domain reasoner area. We enrich the analysis by bringing to the discussion several relevant contextual elements from CS1 courses, their definition and implementation. Concerning cognitive models and exercises, we build upon the principles of Bloom's Taxonomy as a relatively standardized basis and use them as a framework for the study and analysis of complexity in basic programming exercises. Our analysis includes requirements for the domain reasoner which are necessary for the exercise analysis. We propose for such a purpose a three-layered conceptual model considering exercise evaluation, programming and metaprogramming. ",On Requirements for Programming Exercises from an E-learning Perspective " Successful management of emotional stimuli is a pivotal issue concerning Affective Computing (AC) and the related research. As a subfield of Artificial Intelligence, AC is concerned not only with the design of computer systems and the accompanying hardware that can recognize, interpret, and process human emotions, but also with the development of systems that can trigger human emotional response in an ordered and controlled manner. This requires the maximum attainable precision and efficiency in the extraction of data from emotionally annotated databases. While these databases do use keywords or tags for description of the semantic content, they do not provide either the necessary flexibility or leverage needed to efficiently extract the pertinent emotional content. Therefore, to this end we propose the introduction of ontologies as a new paradigm for the description of emotionally annotated data. The ability to select and sequence data based on their semantic attributes is vital for any study involving metadata, semantics and ontological sorting, like the Semantic Web or the Social Semantic Desktop, and the approach described in the paper facilitates reuse in these areas as well. ",Tagging multimedia stimuli with ontologies " To model combinatorial decision problems involving uncertainty and probability, we introduce scenario-based stochastic constraint programming. Stochastic constraint programs contain both decision variables, which we can set, and stochastic variables, which follow a discrete probability distribution. We provide a semantics for stochastic constraint programs based on scenario trees. Using this semantics, we can compile stochastic constraint programs down into conventional (non-stochastic) constraint programs.
This allows us to exploit the full power of existing constraint solvers. We have implemented this framework for decision making under uncertainty in stochastic OPL, a language which is based on the OPL constraint modelling language [Hentenryck et al., 1999]. To illustrate the potential of this framework, we model a wide range of problems in areas as diverse as portfolio diversification, agricultural planning and production/inventory management. ",Stochastic Constraint Programming: A Scenario-Based Approach " To model combinatorial decision problems involving uncertainty and probability, we introduce stochastic constraint programming. Stochastic constraint programs contain both decision variables (which we can set) and stochastic variables (which follow a probability distribution). They combine together the best features of traditional constraint satisfaction, stochastic integer programming, and stochastic satisfiability. We give a semantics for stochastic constraint programs, and propose a number of complete algorithms and approximation procedures. Finally, we discuss a number of extensions of stochastic constraint programming to relax various assumptions like the independence between stochastic variables, and compare with other approaches for decision making under uncertainty. ",Stochastic Constraint Programming " Often user interfaces of theorem proving systems focus on assisting particularly trained and skilled users, i.e., proof experts. As a result, the systems are difficult to use for non-expert users. This paper describes a paper and pencil HCI experiment, in which (non-expert) students were asked to make suggestions for a GUI for an interactive system for mathematical proofs. They had to explain the usage of the GUI by applying it to construct a proof sketch for a given theorem. The evaluation of the experiment provides insights for the interaction design for non-expert users and the needs and wants of this user group. ",Designing a GUI for Proofs - Evaluation of an HCI Experiment " The Ouroboros Model is a new conceptual proposal for an algorithmic structure for efficient data processing in living beings as well as for artificial agents. Its central feature is a general repetitive loop where one iteration cycle sets the stage for the next. Sensory input activates data structures (schemata) with similar constituents encountered before, thus expectations are kindled. This corresponds to the highlighting of empty slots in the selected schema, and these expectations are compared with the actually encountered input. Depending on the outcome of this consumption analysis different next steps like search for further data or a reset, i.e. a new attempt employing another schema, are triggered. Monitoring of the whole process, and in particular of the flow of activation directed by the consumption analysis, yields valuable feedback for the optimum allocation of attention and resources including the selective establishment of useful new memory entries. ",Flow of Activity in the Ouroboros Model " In a certain number of situations, human cognitive functioning is difficult to represent with classical artificial intelligence structures. Such a difficulty arises in the polyneuropathy diagnosis which is based on the spatial distribution, along the nerve fibres, of lesions, together with the synthesis of several partial diagnoses. 
Faced with this problem while building up an expert system (NEUROP), we developed a heterogeneous knowledge representation associating a finite automaton with first order logic. A number of knowledge representation problems raised by the electromyography test features are examined in this study, and the expert system architecture allowing such knowledge modeling is laid out. ","Heterogeneous knowledge representation using a finite automaton and first order logic: a case study in electromyography" " In this paper a new dynamic subsumption technique for Boolean CNF formulae is proposed. It exploits simple and sufficient conditions to detect, during conflict analysis, clauses from the original formula that can be reduced by subsumption. During the learnt clause derivation, and at each step of the resolution process, we simply check for backward subsumption between the current resolvent and clauses from the original formula and encoded in the implication graph. Our approach gives rise to a strong and dynamic simplification technique that exploits learning to eliminate literals from the original clauses. Experimental results show that the integration of our dynamic subsumption approach within the state-of-the-art SAT solvers Minisat and Rsat achieves interesting improvements, particularly on crafted instances. ",Learning for Dynamic subsumption " Today, science has a powerful tool for the description of reality - numbers. However, the concept of number did not appear immediately; let us try to trace the evolution of this concept. Numbers emerged from the need for accurate estimates of quantity in order to permit the comparison of objects. If one observes how many times a day a person uses numbers and how many times one compares, it becomes evident that comparison is used much more frequently. However, comparison is not possible without two opposite basic standards. Thus, to introduce the concept of comparison, one must have two opposing standards; in turn, the operation of comparison is necessary to introduce the concept of number. Arguably, the scientific description of reality is impossible without the concept of opposites. This paper analyzes the concept of opposites as the basis for the introduction of the principle of development. ",Principle of development " Social Network Analysis (SNA) tries to understand and exploit the key features of social networks in order to manage their life cycle and predict their evolution. Increasingly popular web 2.0 sites are forming huge social networks. Classical methods from social network analysis (SNA) have been applied to such online networks. In this paper, we propose leveraging semantic web technologies to merge and exploit the best features of each domain. We present how to facilitate and enhance the analysis of online social networks, exploiting the power of semantic social network analysis. ",Semantic Social Network Analysis " We describe a variant of the resolution rule of proof and show that it is complete for the stable semantics of logic programs. We show applications of this result. ",Guarded resolution for answer set programming " A fuzzy mnesor space is a semimodule over the positive real numbers. It can be used as a theoretical framework for fuzzy sets. Hence we can prove a great number of properties for fuzzy sets without referring to the membership functions. ",Fuzzy Mnesors " We apply proof-theoretic techniques in Answer Set Programming. The main results include: 1.
A characterization of continuity properties of Gelfond-Lifschitz operator for logic program. 2. A propositional characterization of stable models of logic programs (without referring to loop formulas. ",An Application of Proof-Theory in Answer Set Programming " We show that some common and important global constraints like ALL-DIFFERENT and GCC can be decomposed into simple arithmetic constraints on which we achieve bound or range consistency, and in some cases even greater pruning. These decompositions can be easily added to new solvers. They also provide other constraints with access to the state of the propagator by sharing of variables. Such sharing can be used to improve propagation between constraints. We report experiments with our decomposition in a pseudo-Boolean solver. ","Decompositions of All Different, Global Cardinality and Related Constraints" " To model combinatorial decision problems involving uncertainty and probability, we extend the stochastic constraint programming framework proposed in [Walsh, 2002] along a number of important dimensions (e.g. to multiple chance constraints and to a range of new objectives). We also provide a new (but equivalent) semantics based on scenarios. Using this semantics, we can compile stochastic constraint programs down into conventional (nonstochastic) constraint programs. This allows us to exploit the full power of existing constraint solvers. We have implemented this framework for decision making under uncertainty in stochastic OPL, a language which is based on the OPL constraint modelling language [Hentenryck et al., 1999]. To illustrate the potential of this framework, we model a wide range of problems in areas as diverse as finance, agriculture and production. ",Scenario-based Stochastic Constraint Programming " Many real life optimization problems contain both hard and soft constraints, as well as qualitative conditional preferences. However, there is no single formalism to specify all three kinds of information. We therefore propose a framework, based on both CP-nets and soft constraints, that handles both hard and soft constraints as well as conditional preferences efficiently and uniformly. We study the complexity of testing the consistency of preference statements, and show how soft constraints can faithfully approximate the semantics of conditional preference statements whilst improving the computational complexity ","Reasoning about soft constraints and conditional preferences: complexity results and approximation techniques" " We identify a new and important global (or non-binary) constraint. This constraint ensures that the values taken by two vectors of variables, when viewed as multisets, are ordered. This constraint is useful for a number of different applications including breaking symmetry and fuzzy constraint satisfaction. We propose and implement an efficient linear time algorithm for enforcing generalised arc consistency on such a multiset ordering constraint. Experimental results on several problem domains show considerable promise. ",Multiset Ordering Constraints " We relate tag clouds to other forms of visualization, including planar or reduced dimensionality mapping, and Kohonen self-organizing maps. Using a modified tag cloud visualization, we incorporate other information into it, including text sequence and most pertinent words. Our notion of word pertinence goes beyond just word frequency and instead takes a word in a mathematical sense as located at the average of all of its pairwise relationships. 
We capture semantics through context, taken as all pairwise relationships. Our domain of application is that of filmscript analysis. The analysis of filmscripts, always important for cinema, is experiencing a major gain in importance in the context of television. Our objective in this work is to visualize the semantics of a filmscript and, beyond filmscripts, of any other partially structured, time-ordered sequence of text segments. In particular we develop an innovative approach to plot characterization. ",Tag Clouds for Displaying Semantics: The Case of Filmscripts " The paper proposes an analysis of some existing ontologies, in order to point out ways to resolve semantic heterogeneity in information systems. The authors highlight the tasks in a Knowledge Acquisition System and identify aspects related to the addition of new information to an intelligent system. A solution is proposed, as a combination of ontology reasoning services and natural language generation. A multi-agent system will be conceived with an extractor agent, a reasoner agent and a competence management agent. ",Considerations on Construction Ontologies " We have been developing a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities detected on video frames. The output of our system is a set of recognised long-term activities, which are pre-defined temporal combinations of short-term activities. The constraints on the short-term activities that, if satisfied, lead to the recognition of a long-term activity, are expressed using a dialect of the Event Calculus. We illustrate the expressiveness of the dialect by showing the representation of several typical complex activities. Furthermore, we present a detailed evaluation of the system through experimentation on a benchmark dataset of surveillance videos. ",A Logic Programming Approach to Activity Recognition " People have to make important decisions within a time frame. Hence, it is imperative to employ means or strategies to aid effective decision making. Consequently, Economic Intelligence (EI) has emerged as a field to aid strategic and timely decision making in an organization. In the course of attaining this goal, it is indispensable to provide for the conservation of the intellectual resources invested in the process of decision making. This intellectual resource is nothing other than the knowledge of the actors as well as that of the various processes for effecting decision making. Knowledge has been recognized as a strategic economic resource for enhancing productivity and a key to innovation in any organization or community. Thus, its adequate management with cognizance of its temporal properties is highly indispensable. Temporal properties of knowledge refer to the date and time (known as the timestamp) such knowledge is created as well as the duration or interval between related knowledge. This paper focuses on the need for a user-centered knowledge management approach as well as the exploitation of associated temporal properties. Our perspective on knowledge is with respect to decision-problem projects in EI. Our hypothesis is that the possibility of reasoning about temporal properties in the exploitation of knowledge in EI projects should foster timely decision making through the generation of useful inferences from available and reusable knowledge for a new project. 
","Knowledge Management in Economic Intelligence with Reasoning on Temporal Attributes" " I discuss (ontologies_and_ontological_knowledge_bases / formal_methods_and_theories) duality and its category theory extensions as a step toward a solution to Knowledge-Based Systems Theory. In particular I focus on the example of the design of elements of ontologies and ontological knowledge bases of next three electronic courses: Foundations of Research Activities, Virtual Modeling of Complex Systems and Introduction to String Theory. ",Toward a Category Theory Design of Ontological Knowledge Bases " Mnesors are defined as elements of a semimodule over the min-plus integers. This two-sorted structure is able to merge graduation properties of vectors and idempotent properties of boolean numbers, which makes it appropriate for hybrid systems. We apply it to the control of an inverted pendulum and design a full logical controller, that is, without the usual algebra of real numbers. ",Mnesors for automatic control " We consider the following sequential decision problem. Given a set of items of unknown utility, we need to select one of as high a utility as possible (``the selection problem''). Measurements (possibly noisy) of item values prior to selection are allowed, at a known cost. The goal is to optimize the overall sequential decision process of measurements and selection. Value of information (VOI) is a well-known scheme for selecting measurements, but the intractability of the problem typically leads to using myopic VOI estimates. In the selection problem, myopic VOI frequently badly underestimates the value of information, leading to inferior sensing plans. We relax the strict myopic assumption into a scheme we term semi-myopic, providing a spectrum of methods that can improve the performance of sensing plans. In particular, we propose the efficiently computable method of ``blinkered'' VOI, and examine theoretical bounds for special cases. Empirical evaluation of ``blinkered'' VOI in the selection problem with normally distributed item values shows that is performs much better than pure myopic VOI. ",Semi-Myopic Sensing Plans for Value Optimization " There are several well-known justifications for conditioning as the appropriate method for updating a single probability measure, given an observation. However, there is a significant body of work arguing for sets of probability measures, rather than single measures, as a more realistic model of uncertainty. Conditioning still makes sense in this context--we can simply condition each measure in the set individually, then combine the results--and, indeed, it seems to be the preferred updating procedure in the literature. But how justified is conditioning in this richer setting? Here we show, by considering an axiomatic account of conditioning given by van Fraassen, that the single-measure and sets-of-measures cases are very different. We show that van Fraassen's axiomatization for the former case is nowhere near sufficient for updating sets of measures. We give a considerably longer (and not as compelling) list of axioms that together force conditioning in this setting, and describe other update methods that are allowed once any of these axioms is dropped. ",Updating Sets of Probabilities " This paper presents a novel two-stage flexible dynamic decision support based optimal threat evaluation and defensive resource scheduling algorithm for multi-target air-borne threats. 
The algorithm provides flexibility and optimality by swapping between two objective functions, i.e. the preferential and subtractive defense strategies, as and when required. To further enhance the solution quality, it outlines and divides the critical parameters used in Threat Evaluation and Weapon Assignment (TEWA) into three broad categories (Triggering, Scheduling and Ranking parameters). The proposed algorithm uses a variant of the many-to-many Stable Marriage Algorithm (SMA) to solve the Threat Evaluation (TE) and Weapon Assignment (WA) problem. In the TE stage, Threat Ranking and Threat-Asset pairing are done. Stage two is based on a new flexible dynamic weapon scheduling algorithm, allowing multiple engagements using a shoot-look-shoot strategy, to compute a near-optimal solution for a range of scenarios. The analysis part of this paper presents the strengths and weaknesses of the proposed algorithm over an alternative greedy algorithm as applied to different offline scenarios. ","A Novel Two-Stage Dynamic Decision Support based Optimal Threat Evaluation and Defensive Resource Scheduling Algorithm for Multi Air-borne threats" " Martin and Osswald \cite{Martin07} have recently proposed many generalizations of combination rules on quantitative beliefs in order to manage the conflict and to consider the specificity of the responses of the experts. Since the experts usually express themselves in natural language with linguistic labels, Smarandache and Dezert \cite{Li07} have introduced a mathematical framework for dealing directly also with qualitative beliefs. In this paper we recall some elements of our previous works and propose new combination rules, developed for the fusion of both qualitative and quantitative beliefs. ",General combination rules for qualitative and quantitative beliefs " Surveillance control and reporting (SCR) systems for air threats play an important role in the defense of a country. An SCR system corresponds to air and ground situation management/processing along with information fusion, communication, coordination, simulation and other critical defense-oriented tasks. Threat Evaluation and Weapon Assignment (TEWA) sits at the core of an SCR system. In such a system, maximal or near maximal utilization of constrained resources is of extreme importance. Manual TEWA systems cannot provide optimality because of different limitations, e.g. a surface-to-air missile (SAM) can fire from a distance of 5 km, but manual TEWA systems are constrained by human vision range and other constraints. Current TEWA systems usually work on a target-by-target basis using some type of greedy algorithm, thus affecting the optimality of the solution and failing in multi-target scenarios. This paper relates to a novel two-staged flexible dynamic decision support based optimal threat evaluation and weapon assignment algorithm for multi-target air-borne threats. ","A Novel Two-Staged Decision Support based Threat Evaluation and Weapon Assignment Algorithm, Asset-based Dynamic Weapon Scheduling using Artificial Intelligence Techniques" " Collective graphical models exploit inter-instance associative dependence to output more accurate labelings. However, existing models support a very limited kind of associativity, which restricts accuracy gains. This paper makes two major contributions. First, we propose a general collective inference framework that biases data instances to agree on a set of {\em properties} of their labelings. Agreement is encouraged through symmetric clique potentials.
We show that rich properties leads to bigger gains, and present a systematic inference procedure for a large class of such properties. The procedure performs message passing on the cluster graph, where property-aware messages are computed with cluster specific algorithms. This provides an inference-only solution for domain adaptation. Our experiments on bibliographic information extraction illustrate significant test error reduction over unseen domains. Our second major contribution consists of algorithms for computing outgoing messages from clique clusters with symmetric clique potentials. Our algorithms are exact for arbitrary symmetric potentials on binary labels and for max-like and majority-like potentials on multiple labels. For majority potentials, we also provide an efficient Lagrangian Relaxation based algorithm that compares favorably with the exact algorithm. We present a 13/15-approximation algorithm for the NP-hard Potts potential, with runtime sub-quadratic in the clique size. In contrast, the best known previous guarantee for graphs with Potts potentials is only 1/2. We empirically show that our method for Potts potentials is an order of magnitude faster than the best alternatives, and our Lagrangian Relaxation based algorithm for majority potentials beats the best applicable heuristic -- ICM. ",Generalized Collective Inference with Symmetric Clique Potentials " This research report presents an extension of Cumulative of Choco constraint solver, which is useful to encode over-constrained cumulative problems. This new global constraint uses sweep and task interval violation-based algorithms. ",The Soft Cumulative Constraint " In this paper, we propose a first-order ontology for generalized stratified order structure. We then classify the models of the theory using model-theoretic techniques. An ontology mapping from this ontology to the core theory of Process Specification Language is also discussed. ",Modelling Concurrent Behaviors in the Process Specification Language " The article presents a study of rather simple local search heuristics for the single machine total weighted tardiness problem (SMTWTP), namely hillclimbing and Variable Neighborhood Search. In particular, we revisit these approaches for the SMTWTP as there appears to be a lack of appropriate/challenging benchmark instances in this case. The obtained results are impressive indeed. Only few instances remain unsolved, and even those are approximated within 1% of the optimal/best known solutions. Our experiments support the claim that metaheuristics for the SMTWTP are very likely to lead to good results, and that, before refining search strategies, more work must be done with regard to the proposition of benchmark data. Some recommendations for the construction of such data sets are derived from our investigations. ","The Single Machine Total Weighted Tardiness Problem - Is it (for Metaheuristics) a Solved Problem ?" " The article describes the proposition and application of a local search metaheuristic for multi-objective optimization problems. It is based on two main principles of heuristic search, intensification through variable neighborhoods, and diversification through perturbations and successive iterations in favorable regions of the search space. The concept is successfully tested on permutation flow shop scheduling problems under multiple objectives and compared to other local search approaches. 
While the obtained results are encouraging in terms of their quality, another positive attribute of the approach is its simplicity, as it requires the setting of only very few parameters. ","Improvements for multi-objective flow shop scheduling by Pareto Iterated Local Search" " This paper discusses ""computational"" systems capable of ""computing"" functions not computable by predefined Turing machines if the systems are not isolated from their environment. Roughly speaking, these systems can change their finite descriptions by interacting with their environment. ",Beyond Turing Machines " I propose that pattern recognition, memorization and processing are key concepts that can form a set of principles for the theoretical modeling of the mind's function. Most of the questions about the functioning of the mind can be answered by descriptive modeling and definitions derived from these principles. An understandable definition of consciousness can be drawn based on the assumption that a pattern recognition system can recognize its own patterns of activity. The principles, descriptive modeling and definitions can be a basis for theoretical and applied research in the cognitive sciences, particularly in artificial intelligence studies. ",Pattern Recognition Theory of Mind " The report gives an overview of activities on the topic of the Semantic Web. It has been released as a technical report for the project ""KTweb -- Connecting Knowledge Technologies Communities"" in 2003. ",Fact Sheet on Semantic Web " Restart strategies are an important factor in the performance of conflict-driven Davis-Putnam-style SAT solvers. Selecting a good restart strategy for a problem instance can enhance the performance of a solver. Inspired by recent success applying machine learning techniques to predict the runtime of SAT solvers, we present a method which uses machine learning to boost solver performance through a smart selection of the restart strategy. Based on easy-to-compute features, we train both a satisfiability classifier and runtime models. We use these models to choose between restart strategies. We present experimental results comparing this technique with the most commonly used restart strategies. Our results demonstrate that machine learning is effective in improving solver performance. ",Restart Strategy Selection using Machine Learning Techniques " We present two different methods for estimating the cost of solving SAT problems. The methods focus on the online behaviour of the backtracking solver, as well as the structure of the problem. Modern SAT solvers present several challenges for estimating search cost, including coping with non-chronological backtracking, learning and restarts. Our first method adapts an existing algorithm for estimating the size of a search tree to deal with these challenges. We then suggest a second method that uses a linear model trained on data gathered online at the start of search. We compare the effectiveness of these two methods using random and structured problems. We also demonstrate that predictions made in early restarts can be used to improve later predictions. We conclude by showing that the cost of solving a set of problems can be reduced by selecting a solver from a portfolio based on such cost estimations. ",Online Search Cost Estimation for SAT Solvers " Classification is the basis of cognition. Unlike other solutions, this study approaches it from the view of outliers. We present an expanding algorithm to detect outliers in univariate datasets, together with the underlying foundation.
The expanding algorithm runs in a holistic way, making it a rather robust solution. Synthetic and real data experiments show its power. Furthermore, an application for multi-class problems leads to the introduction of the oscillator algorithm. The corresponding result implies the potential wide use of the expanding algorithm. ",On Classification from Outlier View " We consider a sequence of repeated interactions between an agent and an environment. Uncertainty about the environment is captured by a probability distribution over a space of hypotheses, which includes all computable functions. Given a utility function, we can evaluate the expected utility of any computational policy for interaction with the environment. After making some plausible assumptions (and maybe one not-so-plausible assumption), we show that if the utility function is unbounded, then the expected utility of any policy is undefined. ",Convergence of Expected Utility for Universal AI " This study describes the application of some approximate reasoning methods to the analysis of hydrocyclone performance. In this manner, using a combination of a Self-Organizing Map (SOM) with a Neuro-Fuzzy Inference System (NFIS) - SONFIS - and with Rough Set Theory (RST) - SORST - crisp and fuzzy granules are obtained. Balancing of crisp granules and non-crisp granules can be implemented in close-open iteration. Using different criteria and based on the granulation level, a balance point (interval) or a pseudo-balance point is estimated. Validation of the proposed methods on the hydrocyclone data set is provided. ",Knowledge Discovery of Hydrocyclone's Circuit Based on SONFIS and SORST " In this paper we introduce two new DSm fusion conditioning rules with examples and, as a generalization of them, a class of DSm fusion conditioning rules, and then extend them to a class of DSm conditioning rules. ",A Class of DSm Conditional Rules " When implementing a propagator for a constraint, one must decide about variants: When implementing min, should one also implement max? Should one implement linear constraints both with unit and non-unit coefficients? Constraint variants are ubiquitous: implementing them requires considerable (if not prohibitive) effort and decreases maintainability, but will deliver better performance than resorting to constraint decomposition. This paper shows how to use views to derive perfect propagator variants. A model for views and derived propagators is introduced. Derived propagators are proved to be indeed perfect in that they inherit essential properties such as correctness and domain and bounds consistency. Techniques for systematically deriving propagators such as transformation, generalization, specialization, and type conversion are developed. The paper introduces an implementation architecture for views that is independent of the underlying constraint programming system. A detailed evaluation of views implemented in Gecode shows that derived propagators are efficient and that views often incur no overhead. Without views, Gecode would either require 180 000 rather than 40 000 lines of propagator code, or would lack many efficient propagator variants. Compared to 8 000 lines of code for views, the reduction in code for propagators yields a 1750% return on investment. ",View-based Propagator Derivation " The explorative mind-map is a dynamic framework that emerges automatically from the input it gets. It is unlike a verificative modeling system where existing (human) thoughts are placed and connected together.
In this regard, explorative mind-maps change their size continuously, being adaptive with connectionist cells inside; mind-maps process data input incrementally and offer lots of possibilities to interact with the user through an appropriate communication interface. With respect to a cognitive motivated situation like a conversation between partners, mind-maps become interesting as they are able to process stimulating signals whenever they occur. If these signals are close to an own understanding of the world, then the conversational partner becomes automatically more trustful than if the signals do not or less match the own knowledge scheme. In this (position) paper, we therefore motivate explorative mind-maps as a cognitive engine and propose these as a decision support engine to foster trust. ",A Cognitive Mind-map Framework to Foster Trust " To capture the uncertainty of information or knowledge in information systems, various information granulations, also known as knowledge granulations, have been proposed. Recently, several axiomatic definitions of information granulation have been introduced. In this paper, we try to improve these axiomatic definitions and give a universal construction of information granulation by relating information granulations with a class of functions of multiple variables. We show that the improved axiomatic definition has some concrete information granulations in the literature as instances. ",An improved axiomatic definition of information granulation " Current research on qualitative spatial representation and reasoning mainly focuses on one single aspect of space. In real world applications, however, multiple spatial aspects are often involved simultaneously. This paper investigates problems arising in reasoning with combined topological and directional information. We use the RCC8 algebra and the Rectangle Algebra (RA) for expressing topological and directional information respectively. We give examples to show that the bipath-consistency algorithm BIPATH is incomplete for solving even basic RCC8 and RA constraints. If topological constraints are taken from some maximal tractable subclasses of RCC8, and directional constraints are taken from a subalgebra, termed DIR49, of RA, then we show that BIPATH is able to separate topological constraints from directional ones. This means, given a set of hybrid topological and directional constraints from the above subclasses of RCC8 and RA, we can transfer the joint satisfaction problem in polynomial time to two independent satisfaction problems in RCC8 and RA. For general RA constraints, we give a method to compute solutions that satisfy all topological constraints and approximately satisfy each RA constraint to any prescribed precision. ",Reasoning with Topological and Directional Spatial Information " Direction relations between extended spatial objects are important commonsense knowledge. Recently, Goyal and Egenhofer proposed a formal model, known as Cardinal Direction Calculus (CDC), for representing direction relations between connected plane regions. CDC is perhaps the most expressive qualitative calculus for directional information, and has attracted increasing interest from areas such as artificial intelligence, geographical information science, and image retrieval. Given a network of CDC constraints, the consistency problem is deciding if the network is realizable by connected regions in the real plane. 
This paper provides a cubic algorithm for checking consistency of basic CDC constraint networks, and proves that reasoning with CDC is in general an NP-Complete problem. For a consistent network of basic CDC constraints, our algorithm also returns a 'canonical' solution in cubic time. This cubic algorithm is also adapted to cope with cardinal directions between possibly disconnected regions, in which case the best currently known algorithm has time complexity O(n^5). ",Reasoning about Cardinal Directions between Extended Objects " In this paper, we address the problem of generating preferred plans by combining the procedural control knowledge specified by Hierarchical Task Networks (HTNs) with rich qualitative user preferences. The outcome of our work is a language for specifying user preferences, tailored to HTN planning, together with a provably optimal preference-based planner, HTNPLAN, that is implemented as an extension of SHOP2. To compute preferred plans, we propose an approach based on forward-chaining heuristic search. Our heuristic uses an admissible evaluation function measuring the satisfaction of preferences over partial plans. Our empirical evaluation demonstrates the effectiveness of our HTNPLAN heuristics. We prove our approach sound and optimal with respect to the plans it generates by appealing to a situation calculus semantics of our preference language and of HTN planning. While our implementation builds on SHOP2, the language and techniques proposed here are relevant to a broad range of HTN planners. ",On Planning with Preferences in HTN " We study the notion of informedness in a client-consultant setting. Using a software simulator, we examine the extent to which it pays off for consultants to provide their clients with advice that is well-informed, or with advice that is merely meant to appear to be well-informed. The latter strategy is beneficial in that it costs fewer resources to keep up-to-date, but carries the risk of a decreased reputation if the clients discover the low level of informedness of the consultant. Our experimental results indicate that under different circumstances, different strategies yield the optimal results (net profit) for the consultants. ",Assessing the Impact of Informedness on a Consultant's Profit " We describe in this article a multiagent urban traffic simulation, as we believe individual-based modeling is necessary to encompass the complex influence the actions of an individual vehicle can have on the overall flow of vehicles. We first describe how we build a graph description of the network from purely geometric data, ESRI shapefiles. We then explain how we add traffic-related data to this graph. We go on after that with the model of the vehicle agents: origin and destination, driving behavior, multiple lanes, crossroads, and interactions with the other vehicles in day-to-day, 'ordinary' traffic. We conclude with the presentation of the resulting simulation of this model on the Rouen agglomeration. ",A multiagent urban traffic simulation. Part I: dealing with the ordinary " 2007 saw the first international congress on the 'square of oppositions'. A first attempt to structure debate using n-opposition theory was presented along with the results of a first experiment on the web. Our proposal for this paper is to define relations between arguments through a structure of opposition (the square of oppositions is one structure of opposition). We will be trying to answer the following questions: How to organize debates on the web 2.0?
How to structure them in a logical way? What is the role of n-opposition theory in this context? We present in this paper the results of three experiments (Betapolitique 2007, ECAP 2008, Intermed 2008). ",n-Opposition theory to structure debates " We propose Interactive Differential Evolution (IDE) based on paired comparisons for reducing user fatigue and evaluate its convergence speed in comparison with Interactive Genetic Algorithms (IGA) and tournament IGA. The user interface and convergence performance are two key factors for reducing Interactive Evolutionary Computation (IEC) user fatigue. Unlike IGA and conventional IDE, users of the proposed IDE and tournament IGA do not need to compare all individuals with each other but only pairs of individuals, which largely decreases user fatigue. In this paper, we design a pseudo-IEC user and evaluate another factor, IEC convergence performance, using IEC simulators, and show that our proposed IDE converges significantly faster than IGA and tournament IGA, i.e. our proposal is superior to the others from both the user interface and the convergence performance points of view. ",Paired Comparisons-based Interactive Differential Evolution " This paper describes the application of information granulation theory to the back analysis of the southeast wall of the Jeffrey mine, Quebec. In this manner, using a combination of a Self-Organizing Map (SOM) and rough set theory (RST), crisp and rough granules are obtained. Balancing of crisp granules and sub-rough granules is carried out in close-open iteration. Combining hard and soft computing, namely the finite difference method (FDM) and computational intelligence, and taking into account missing information are the two main benefits of the proposed method. As a practical example, a reverse analysis of the failure of the southeast wall of the Jeffrey mine is accomplished. ",Back analysis based on SOM-RST system " Fault diagnosis has become a very important area of research during the last decade due to the advancement of mechanical and electrical systems in industry. The automobile is a crucial field where fault diagnosis is given special attention. Due to the increasing complexity and newly added features in vehicles, a comprehensive study has to be performed in order to achieve an appropriate diagnosis model. A diagnosis system is capable of identifying the faults of a system by investigating the observable effects (or symptoms). The system categorizes the fault into a diagnosis class and identifies a probable cause based on the supplied fault symptoms. Fault categorization and identification are done using similarity matching techniques. The development of diagnosis classes is done by making use of previous experience, knowledge or information within an application area. The necessary information used may come from several sources of knowledge, such as from system analysis. In this paper, similarity matching techniques for fault diagnosis in automotive infotainment applications are discussed. ","Similarity Matching Techniques for Fault Diagnosis in Automotive Infotainment Electronics" " Diverse recommendation techniques have already been proposed and encapsulated into several e-business applications, aiming to perform a more accurate evaluation of the existing information and accordingly augment the assistance provided to the users involved. This paper reports on the development and integration of a recommendation module in an agent-based transportation transactions management system.
The module is built according to a novel hybrid recommendation technique, which combines the advantages of collaborative filtering and knowledge-based approaches. The proposed technique and supporting module assist customers in considering in detail alternative transportation transactions that satisfy their requests, as well as in evaluating completed transactions. The related services are invoked through a software agent that constructs the appropriate knowledge rules and performs a synthesis of the recommendation policy. ","Performing Hybrid Recommendation in Intermodal Transportation-the FTMarket System's Recommendation Module" " We study decompositions of NVALUE, a global constraint that can be used to model a wide range of problems where values need to be counted. Whilst decomposition typically hinders propagation, we identify one decomposition that maintains a global view as enforcing bound consistency on the decomposition achieves bound consistency on the original global NVALUE constraint. Such decompositions offer the prospect for advanced solving techniques like nogood learning and impact based branching heuristics. They may also help SAT and IP solvers take advantage of the propagation of global constraints. ",Decomposition of the NVALUE constraint " Symmetry is an important feature of many constraint programs. We show that any symmetry acting on a set of symmetry breaking constraints can be used to break symmetry. Different symmetries pick out different solutions in each symmetry class. We use these observations in two methods for eliminating symmetry from a problem. These methods are designed to have many of the advantages of symmetry breaking methods that post static symmetry breaking constraint without some of the disadvantages. In particular, the two methods prune the search space using fast and efficient propagation of posted constraints, whilst reducing the conflict between symmetry breaking and branching heuristics. Experimental results show that the two methods perform well on some standard benchmarks. ",Symmetries of Symmetry Breaking Constraints " Fuzzy constraints are a popular approach to handle preferences and over-constrained problems in scenarios where one needs to be cautious, such as in medical or space applications. We consider here fuzzy constraint problems where some of the preferences may be missing. This models, for example, settings where agents are distributed and have privacy issues, or where there is an ongoing preference elicitation process. In this setting, we study how to find a solution which is optimal irrespective of the missing preferences. In the process of finding such a solution, we may elicit preferences from the user if necessary. However, our goal is to ask the user as little as possible. We define a combined solving and preference elicitation scheme with a large number of different instantiations, each corresponding to a concrete algorithm which we compare experimentally. We compute both the number of elicited preferences and the ""user effort"", which may be larger, as it contains all the preference values the user has to compute to be able to respond to the elicitation requests. While the number of elicited preferences is important when the concern is to communicate as little information as possible, the user effort measures also the hidden work the user has to do to be able to communicate the elicited preferences. 
Our experimental results show that some of our algorithms are very good at finding a necessarily optimal solution while asking the user for only a very small fraction of the missing preferences. The user effort is also very small for the best algorithms. Finally, we test these algorithms on hard constraint problems with possibly missing constraints, where the aim is to find feasible solutions irrespective of the missing constraints. ","Elicitation strategies for fuzzy constraint problems with missing preferences: algorithms and experimental studies" " We propose new filtering algorithms for the SEQUENCE constraint and some extensions of the SEQUENCE constraint based on network flows. We enforce domain consistency on the SEQUENCE constraint in $O(n^2)$ time down a branch of the search tree. This improves upon the best existing domain consistency algorithm by a factor of $O(\log n)$. The flows used in these algorithms are derived from a linear program. Some of them differ from the flows used to propagate global constraints like GCC since the domains of the variables are encoded as costs on the edges rather than capacities. Such flows are efficient for maintaining bounds consistency over large domains and may be useful for other global constraints. ",Flow-Based Propagators for the SEQUENCE and Related Global Constraints " We introduce the weighted CFG constraint and propose a propagation algorithm that enforces domain consistency in $O(n^3|G|)$ time. We show that this algorithm can be decomposed into a set of primitive arithmetic constraints without hindering propagation. ",The Weighted CFG Constraint " Many models in natural and social sciences are comprised of sets of inter-acting entities whose intensity of interaction decreases with distance. This often leads to structures of interest in these models composed of dense packs of entities. Fast Multipole Methods are a family of methods developed to help with the calculation of a number of computable models such as described above. We propose a method that builds upon FMM to detect and model the dense structures of these systems. ",Building upon Fast Multipole Methods to Detect and Model Organizations " In Probabilistic Risk Management, risk is characterized by two quantities: the magnitude (or severity) of the adverse consequences that can potentially result from the given activity or action, and by the likelihood of occurrence of the given adverse consequences. But a risk seldom exists in isolation: chain of consequences must be examined, as the outcome of one risk can increase the likelihood of other risks. Systemic theory must complement classic PRM. Indeed these chains are composed of many different elements, all of which may have a critical importance at many different levels. Furthermore, when urban catastrophes are envisioned, space and time constraints are key determinants of the workings and dynamics of these chains of catastrophes: models must include a correct spatial topology of the studied risk. Finally, literature insists on the importance small events can have on the risk on a greater scale: urban risks management models belong to self-organized criticality theory. We chose multiagent systems to incorporate this property in our model: the behavior of an agent can transform the dynamics of important groups of them. ","A multiagent urban traffic simulation. Part II: dealing with the extraordinary" " Constrained Optimum Path (COP) problems appear in many real-life applications, especially on communication networks. 
Some of these problems have been considered and solved by specific techniques which are usually difficult to extend. In this paper, we introduce a novel local search modeling for solving some COPs. The modeling features compositionality, modularity and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added to the model. Computational results show the significance of the approach. ","A Local Search Modeling for Constrained Optimum Paths Problems (Extended Abstract)" " Using constraint-based local search, we effectively model and efficiently solve the problem of balancing the traffic demands on portions of the European airspace while ensuring that their capacity constraints are satisfied. The traffic demand of a portion of airspace is the hourly number of flights planned to enter it, and its capacity is the upper bound on this number under which air-traffic controllers can work. Currently, the only form of demand-capacity balancing we allow is ground holding, that is the changing of the take-off times of not yet airborne flights. Experiments with projected European flight plans of the year 2030 show that already this first form of demand-capacity balancing is feasible without incurring too much total delay and that it can lead to a significantly better demand-capacity balance. ","Dynamic Demand-Capacity Balancing for Air Traffic Management Using Constraint-Based Local Search: First Results" " Stochastic local search (SLS) has been an active field of research in the last few years, with new techniques and procedures being developed at an astonishing rate. SLS has been traditionally associated with satisfiability solving, that is, finding a solution for a given problem instance, as its intrinsic nature does not address unsatisfiable problems. Unsatisfiable instances were therefore commonly solved using backtrack search solvers. For this reason, in the late 90s Selman, Kautz and McAllester proposed a challenge to use local search instead to prove unsatisfiability. More recently, two SLS solvers - Ranger and Gunsat - have been developed, which are able to prove unsatisfiability albeit being SLS solvers. In this paper, we first compare Ranger with Gunsat and then propose to improve Ranger performance using some of Gunsat's techniques, namely unit propagation look-ahead and extended resolution. ",On Improving Local Search for Unsatisfiability " This article introduces SatHyS (SAT HYbrid Solver), a novel hybrid approach for propositional satisfiability. It combines local search and the conflict driven clause learning (CDCL) scheme. Each time the local search part reaches a local minimum, the CDCL is launched. For SAT problems it behaves like a tabu list, whereas for UNSAT ones, the CDCL part tries to focus on a minimum unsatisfiable sub-formula (MUS). Experimental results show good performance on many classes of SAT instances from the last SAT competitions. ",Integrating Conflict Driven Clause Learning to Local Search " In this paper, we investigate the hybridization of constraint programming and local search techniques within a large neighbourhood search scheme for solving highly constrained nurse rostering problems. As identified by the research, a crucial part of the large neighbourhood search is the selection of the fragment (neighbourhood, i.e. the set of variables) to be relaxed and re-optimized iteratively. 
The success of the large neighbourhood search depends on the adequacy of this identified neighbourhood with regard to the problematic part of the solution assignment and the choice of the neighbourhood size. We investigate three strategies for choosing fragments of different sizes within the large neighbourhood search scheme. The first two strategies are tailored to the problem properties. The third strategy is more general, using the cost information from the soft constraint violations and their propagation as the indicator for choosing the variables added to the fragment. The three strategies are analyzed and compared on a benchmark nurse rostering problem. Promising results demonstrate the possibility of future work in the hybrid approach. ",A Constraint-directed Local Search Approach to Nurse Rostering Problems " This paper presents a new method and a constraint-based objective function to solve two problems related to the design of optical telecommunication networks, namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP). These network topology problems can be represented as a graph partitioning with capacity constraints as shown in previous works. We present here a new objective function and a new local search algorithm to solve these problems. Experiments conducted in Comet allow us to compare our method to previous ones and show that we obtain better results. ",Sonet Network Design Problems " We explore the use of the Cell Broadband Engine (Cell/BE for short) for combinatorial optimization applications: we present a parallel version of a constraint-based local search algorithm that has been implemented on a multiprocessor BladeCenter machine with twin Cell/BE processors (total of 16 SPUs per blade). This algorithm was chosen because it fits the Cell/BE architecture very well and requires neither shared memory nor communication between processors, while retaining a compact memory footprint. We study the performance on several large optimization benchmarks and show that this achieves mostly linear time speedups, even sometimes super-linear. This is possible because the parallel implementation might explore simultaneously different parts of the search space and therefore converge faster towards the best sub-space and thus towards a solution. Besides getting speedups, the resulting times exhibit a much smaller variance, which benefits applications where a timely reply is critical. ","Parallel local search for solving Constraint Problems on the Cell Broadband Engine (Preliminary Results)" " We explore the idea of using finite automata to implement new constraints for local search (this is already a successful technique in constraint-based global search). We show how it is possible to maintain incrementally the violations of a constraint and its decision variables from an automaton that describes a ground checker for that constraint. We establish the practicality of our approach on real-life personnel rostering problems, and show that it is competitive with the approach of [Pralong, 2007]. ",Toward an automaton Constraint for Local Search " LSCS has been a satellite workshop of the international conference on principles and practice of Constraint Programming (CP) since 2004. 
It is devoted to local search techniques in constraint satisfaction, and focuses on all aspects of local search techniques, including: design and implementation of new algorithms, hybrid stochastic-systematic search, reactive search optimization, adaptive search, modeling for local-search, global constraints, flexibility and robustness, learning methods, and specific applications. ","Proceedings 6th International Workshop on Local Search Techniques in Constraint Satisfaction" " In this paper the behavior of three combinational rules for temporal/sequential attribute data fusion for target type estimation is analyzed. The comparative analysis is based on: Dempster's fusion rule proposed in Dempster-Shafer Theory; Proportional Conflict Redistribution rule no. 5 (PCR5), proposed in Dezert-Smarandache Theory and one alternative class fusion rule, connecting the combination rules for information fusion with particular fuzzy operators, focusing on the t-norm based Conjunctive rule as an analog of the ordinary conjunctive rule and t-conorm based Disjunctive rule as an analog of the ordinary disjunctive rule. The way different t-conorm and t-norm functions within the TCN fusion rule influence target type estimation performance is studied and evaluated. ",Tracking object's type changes with fuzzy based fusion rule " This paper proposes the application of particle swarm optimization (PSO) to the problem of finite element model (FEM) selection. This problem arises when a choice of the best model for a system has to be made from a set of competing models, each developed a priori from engineering judgment. PSO is a population-based stochastic search algorithm inspired by the behaviour of biological entities in nature when they are foraging for resources. Each potentially correct model is represented as a particle that exhibits both individualistic and group behaviour. Each particle moves within the model search space looking for the best solution by updating the parameter values that define it. The most important step in the particle swarm algorithm is the method of representing models, which should take into account the number, location and variables of parameters to be updated. One example structural system is used to show the applicability of PSO in finding an optimal FEM. An optimal model is defined as the model that has the least number of updated parameters and has the smallest parameter variable variation from the mean material properties. Two different objective functions are used to compare the performance of the PSO algorithm. ",Finite element model selection using Particle Swarm Optimization " Motivated by Zadeh's paradigm of computing with words rather than numbers, several formal models of computing with words have recently been proposed. These models are based on automata and thus are not well-suited for concurrent computing. In this paper, we incorporate the well-known model of concurrent computing, Petri nets, together with fuzzy set theory and thereby establish a concurrency model of computing with words--fuzzy Petri nets for computing with words (FPNCWs). The new feature of such fuzzy Petri nets is that the labels of transitions are some special words modeled by fuzzy sets. By employing the methodology of fuzzy reasoning, we give a faithful extension of an FPNCW which makes it possible to compute with more words. The language expressiveness of the two formal models of computing with words, fuzzy automata for computing with words and FPNCWs, is compared as well. 
A few small examples are provided to illustrate the theoretical development. ",A Fuzzy Petri Nets Model for Computing With Words " Modelling emotion has become a challenge nowadays. Therefore, several models have been produced in order to express human emotional activity. However, only a few of them are currently able to express the close relationship existing between emotion and cognition. An appraisal-coping model is presented here, with the aim of simulating the emotional impact caused by the evaluation of a particular situation (appraisal), along with the consequent cognitive reaction intended to face the situation (coping). This model is applied to the ""Cascades"" problem, a small arithmetical exercise designed for ten-year-old pupils. The goal is to create a model corresponding to a child's behaviour when solving the problem using his own strategies. ","Emotion: Appraisal-coping model for the ""Cascades"" problem" " Modeling emotion has become a challenge nowadays. Therefore, several models have been produced in order to express human emotional activity. However, only a few of them are currently able to express the close relationship existing between emotion and cognition. An appraisal-coping model is presented here, with the aim of simulating the emotional impact caused by the evaluation of a particular situation (appraisal), along with the consequent cognitive reaction intended to face the situation (coping). This model is applied to the ""Cascades"" problem, a small arithmetical exercise designed for ten-year-old pupils. The goal is to create a model corresponding to a child's behavior when solving the problem using his own strategies. ",Emotion : mod\`ele d'appraisal-coping pour le probl\`eme des Cascades " Rough set theory, a mathematical tool to deal with inexact or uncertain knowledge in information systems, originally described the indiscernibility of elements by equivalence relations. Covering rough sets are a natural extension of classical rough sets by relaxing the partitions arising from equivalence relations to coverings. Recently, some topological concepts such as neighborhood have been applied to covering rough sets. In this paper, we further investigate the covering rough sets based on neighborhoods by approximation operations. We show that the upper approximation based on neighborhoods can be defined equivalently without using neighborhoods. To analyze the coverings themselves, we introduce unary and composition operations on coverings. A notion of homomorphism is provided to relate two covering approximation spaces. We also examine the properties of approximations preserved by the operations and homomorphisms, respectively. ","Covering rough sets based on neighborhoods: An approach without using neighborhoods" " In Pawlak's rough set theory, a set is approximated by a pair of lower and upper approximations. To measure numerically the roughness of an approximation, Pawlak introduced a quantitative measure of roughness by using the ratio of the cardinalities of the lower and upper approximations. Although the roughness measure is effective, it has the drawback of not being strictly monotonic with respect to the standard ordering on partitions. Recently, some improvements have been made by taking into account the granularity of partitions. In this paper, we approach the roughness measure in an axiomatic way. 
After axiomatically defining roughness measure and partition measure, we provide a unified construction of roughness measure, called strong Pawlak roughness measure, and then explore the properties of this measure. We show that the improved roughness measures in the literature are special instances of our strong Pawlak roughness measure and introduce three more strong Pawlak roughness measures as well. The advantage of our axiomatic approach is that some properties of a roughness measure follow immediately as soon as the measure satisfies the relevant axiomatic definition. ",An axiomatic approach to the roughness measure of rough sets " Adaptation has long been considered as the Achilles' heel of case-based reasoning since it requires some domain-specific knowledge that is difficult to acquire. In this paper, two strategies are combined in order to reduce the knowledge engineering cost induced by the adaptation knowledge (CA) acquisition task: CA is learned from the case base by the means of knowledge discovery techniques, and the CA acquisition sessions are opportunistically triggered, i.e., at problem-solving time. ",Opportunistic Adaptation Knowledge Discovery " Real-time heuristic search algorithms are suitable for situated agents that need to make their decisions in constant time. Since the original work by Korf nearly two decades ago, numerous extensions have been suggested. One of the most intriguing extensions is the idea of backtracking wherein the agent decides to return to a previously visited state as opposed to moving forward greedily. This idea has been empirically shown to have a significant impact on various performance measures. The studies have been carried out in particular empirical testbeds with specific real-time search algorithms that use backtracking. Consequently, the extent to which the trends observed are characteristic of backtracking in general is unclear. In this paper, we present the first entirely theoretical study of backtracking in real-time heuristic search. In particular, we present upper bounds on the solution cost exponential and linear in a parameter regulating the amount of backtracking. The results hold for a wide class of real-time heuristic search algorithms that includes many existing algorithms as a small subclass. ",On Backtracking in Real-time Heuristic Search " This paper presents several novel generalization bounds for the problem of learning kernels based on the analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels has only a log(p) dependency on the number of kernels, p, which is considerably more favorable than the previous best bound given for the same problem. We also give a novel bound for learning with a linear combination of p base kernels with an L_2 regularization whose dependency on p is only in p^{1/4}. ",New Generalization Bounds for Learning Kernels " This paper formulates a necessary and sufficient condition for a generic graph matching problem to be equivalent to the maximum vertex and edge weight clique problem in a derived association graph. 
The consequences of this result are threefold: first, the condition is general enough to cover a broad range of practical graph matching problems; second, a proof to establish equivalence between graph matching and clique search reduces to showing that a given graph matching problem satisfies the proposed condition; and third, the result sets the scene for generic continuous solutions for a broad range of graph matching problems. To illustrate the mathematical framework, we apply it to a number of graph matching problems, including the problem of determining the graph edit distance. ","A Necessary and Sufficient Condition for Graph Matching Being Equivalent to the Maximum Weight Clique Problem" " This paper extends k-means algorithms from the Euclidean domain to the domain of graphs. To recompute the centroids, we apply subgradient methods for solving the optimization-based formulation of the sample mean of graphs. To accelerate the k-means algorithm for graphs without trading computational time against solution quality, we avoid unnecessary graph distance calculations by exploiting the triangle inequality of the underlying distance metric following Elkan's k-means algorithm proposed in \cite{Elkan03}. In experiments we show that the accelerated k-means algorithm is faster than the standard k-means algorithm for graphs provided there is a cluster structure in the data. ",Elkan's k-Means for Graphs " Traditional staging is based on a formal approach to similarity relying on dramaturgical ontologies and instantiation variations. Inspired by interactive data mining, which suggests different approaches, we give an overview of computer science and theater research using computers as partners of the actor to escape the a priori specification of roles. ","Similarit\'e en intension vs en extension : \`a la crois\'ee de l'informatique et du th\'e\^atre" " We address the problem of belief change in (nonmonotonic) logic programming under answer set semantics. Unlike previous approaches to belief change in logic programming, our formal techniques are analogous to those of distance-based belief revision in propositional logic. In developing our results, we build upon the model theory of logic programs furnished by SE models. Since SE models provide a formal, monotonic characterisation of logic programs, we can adapt techniques from the area of belief revision to belief change in logic programs. We introduce methods for revising and merging logic programs, respectively. For the former, we study both subset-based revision as well as cardinality-based revision, and we show that they satisfy the majority of the AGM postulates for revision. For merging, we consider operators following arbitration merging and IC merging, respectively. We also present encodings for computing the revision as well as the merging of logic programs within the same logic programming framework, giving rise to a direct implementation of our approach in terms of off-the-shelf answer set solvers. These encodings reflect in turn the fact that our change operators do not increase the complexity of the base formalism. ",A general approach to belief change in answer set programming " Nearly 15 years ago, a set of qualitative spatial relations between oriented straight line segments (dipoles) was suggested by Schlieder. This work received substantial interest amongst the qualitative spatial reasoning community. However, it turned out to be difficult to establish a sound constraint calculus based on these relations. 
In this paper, we present the results of a new investigation into dipole constraint calculi which uses algebraic methods to derive sound results on the composition of relations and other properties of dipole calculi. Our results are based on a condensed semantics of the dipole relations. In contrast to the points that are normally used, dipoles are extended and have an intrinsic direction. Both features are important properties of natural objects. This allows for a straightforward representation of prototypical reasoning tasks for spatial agents. As an example, we show how to generate survey knowledge from local observations in a street network. The example illustrates the fast constraint-based reasoning capabilities of the dipole calculus. We integrate our results into two reasoning tools which are publicly available. ","Oriented Straight Line Segment Algebra: Qualitative Spatial Reasoning about Oriented Objects" " In this paper we give a thorough presentation of a model proposed by Tononi et al. for modeling \emph{integrated information}, i.e. how much information is generated in a system transitioning from one state to the next one by the causal interaction of its parts and \emph{above and beyond} the information given by the sum of its parts. We also provide a more general formulation of such a model, independent of the time chosen for the analysis and of the uniformity of the probability distribution at the initial time instant. Finally, we prove that integrated information is null for disconnected systems. ",On a Model for Integrated Information " Vector quantization (VQ) is a lossy data compression technique from signal processing, which is restricted to feature vectors and therefore inapplicable to combinatorial structures. This contribution presents a theoretical foundation of graph quantization (GQ) that extends VQ to the domain of attributed graphs. We present the necessary Lloyd-Max conditions for optimality of a graph quantizer and consistency results for optimal GQ design based on empirical distortion measures and stochastic optimization. These results statistically justify existing clustering algorithms in the domain of graphs. The proposed approach provides a template of how to link structural pattern recognition methods other than GQ to statistical pattern recognition. ",Graph Quantization " Human decisional processes result from the employment of selected quantities of relevant information, generally synthesized from environmental incoming data and stored memories. Their main goal is the production of an appropriate and adaptive response to a cognitive or behavioral task. Different strategies of response production can be adopted, among which are haphazard trials, the formation of mental schemes and heuristics. In this paper, we propose a Boolean neural network model that incorporates these strategies by resorting to global optimization strategies during the learning session. The model also characterizes the passage from an unstructured/chaotic attractor neural network typical of data-driven processes to a faster one, forward-only and representative of schema-driven processes. Moreover, a simplified version of the Iowa Gambling Task (IGT) is introduced in order to test the model. Our results match experimental data and point out some relevant knowledge coming from the psychological domain. 
","Decisional Processes with Boolean Neural Network: the Emergence of Mental Schemes" " Internet and expert systems have offered new ways of sharing and distributing knowledge, but there is a lack of researches in the area of web based expert systems. This paper introduces a development of a web-based expert system for the regulations of civil service in the Kingdom of Saudi Arabia named as RCSES. It is the first time to develop such system (application of civil service regulations) as well the development of it using web based approach. The proposed system considers 17 regulations of the civil service system. The different phases of developing the RCSES system are presented, as knowledge acquiring and selection, ontology and knowledge representations using XML format. XML Rule-based knowledge sources and the inference mechanisms were implemented using ASP.net technique. An interactive tool for entering the ontology and knowledge base, and the inferencing was built. It gives the ability to use, modify, update, and extend the existing knowledge base in an easy way. The knowledge was validated by experts in the domain of civil service regulations, and the proposed RCSES was tested, verified, and validated by different technical users and the developers staff. The RCSES system is compared with other related web based expert systems, that comparison proved the goodness, usability, and high performance of RCSES. ",Web-Based Expert System for Civil Service Regulations: RCSES " Many engineering optimization problems can be considered as linear programming problems where all or some of the parameters involved are linguistic in nature. These can only be quantified using fuzzy sets. The aim of this paper is to solve a fuzzy linear programming problem in which the parameters involved are fuzzy quantities with logistic membership functions. To explore the applicability of the method a numerical example is considered to determine the monthly production planning quotas and profit of a home textile group. ","Application of a Fuzzy Programming Technique to Production Planning in the Textile Industry" " Mamdani Fuzzy Model is an important technique in Computational Intelligence (CI) study. This paper presents an implementation of a supervised learning method based on membership function training in the context of Mamdani fuzzy models. Specifically, auto zoom function of a digital camera is modelled using Mamdani technique. The performance of control method is verified through a series of simulation and numerical results are provided as illustrations. ","The Application of Mamdani Fuzzy Model for Auto Zoom Function of a Digital Camera" " In this book we introduce a new procedure called \alpha-Discounting Method for Multi-Criteria Decision Making (\alpha-D MCDM), which is as an alternative and extension of Saaty Analytical Hierarchy Process (AHP). It works for any number of preferences that can be transformed into a system of homogeneous linear equations. A degree of consistency (and implicitly a degree of inconsistency) of a decision-making problem are defined. \alpha-D MCDM is afterwards generalized to a set of preferences that can be transformed into a system of linear and or non-linear homogeneous and or non-homogeneous equations and or inequalities. 
The general idea of \alpha-D MCDM is to assign non-null positive parameters \alpha_1, \alpha_2, ..., \alpha_p to the coefficients in the right-hand side of each preference that diminish or increase them in order to transform the above linear homogeneous system of equations, which has only the null solution, into a system having a particular non-null solution. After finding the general solution of this system, the principles used to assign particular values to all parameters \alpha are the second important part of \alpha-D, yet to be investigated more deeply in the future. In the current book we propose the Fairness Principle, i.e. each coefficient should be discounted with the same percentage (we think this is fair: not showing any favoritism or unfairness to any coefficient), but the reader can propose other principles. For consistent decision-making problems with pairwise comparisons, the \alpha-Discounting Method together with the Fairness Principle gives the same result as AHP. But for weak inconsistent decision-making problems, \alpha-Discounting together with the Fairness Principle gives a different result from AHP. Many consistent, weak inconsistent, and strong inconsistent examples are given in this book. ",$\alpha$-Discounting Multi-Criteria Decision Making ($\alpha$-D MCDM) " This paper proposes a design for a system to generate constraint solvers that are specialised for specific problem models. It describes the design in detail and gives preliminary experimental results showing the feasibility and effectiveness of the approach. ",Dominion -- A constraint solver generator " Machine Consciousness is the study of consciousness from a biological, philosophical, mathematical and physical perspective, and of designing a model that can fit into a programmable system architecture. The prime objective of the study is to make the system architecture behave consciously, as a biological model does. The present work has developed a feasible definition of consciousness that characterizes consciousness with four parameters, i.e., parasitic, symbiotic, self-referral and reproduction. The present work has also developed a biologically inspired consciousness architecture that has the following layers: quantum layer, cellular layer, organ layer and behavioral layer, and has traced the characteristics of consciousness at each layer. Finally, the work has estimated a physical and algorithmic architecture to devise a system that can behave consciously. ","Logical Evaluation of Consciousness: For Incorporating Consciousness into Machine Architecture" " To study the communication between information systems, Wang et al. [C. Wang, C. Wu, D. Chen, Q. Hu, and C. Wu, Communicating between information systems, Information Sciences 178 (2008) 3228-3239] proposed two concepts of type-1 and type-2 consistent functions. Some properties of such functions and induced relation mappings have been investigated there. In this paper, we provide an improvement of the aforementioned work by disclosing the symmetric relationship between type-1 and type-2 consistent functions. We present more properties of consistent functions and induced relation mappings and improve upon several deficient assertions in the original work. In particular, we unify and extend type-1 and type-2 consistent functions into the so-called neighborhood-consistent functions. This provides a convenient means for studying the communication between information systems based on various neighborhoods. 
",Some improved results on communication between information systems " Recently, Wang et al. discussed the properties of fuzzy information systems under homomorphisms in the paper [C. Wang, D. Chen, L. Zhu, Homomorphisms between fuzzy information systems, Applied Mathematics Letters 22 (2009) 1045-1050], where homomorphisms are based upon the concepts of consistent functions and fuzzy relation mappings. In this paper, we classify consistent functions as predecessor-consistent and successor-consistent, and then proceed to present more properties of consistent functions. In addition, we improve some characterizations of fuzzy relation mappings provided by Wang et al. ",Homomorphisms between fuzzy information systems revisited " In this paper, research on AI based modeling technique to optimize development of new alloys with necessitated improvements in properties and chemical mixture over existing alloys as per functional requirements of product is done. The current research work novels AI in lieu of predictions to establish association between material and product customary. Advanced computational simulation techniques like CFD, FEA interrogations are made viable to authenticate product dynamics in context to experimental investigations. Accordingly, the current research is focused towards binding relationships between material design and product design domains. The input to feed forward back propagation prediction network model constitutes of material design features. Parameters relevant to product design strategies are furnished as target outputs. The outcomes of ANN shows good sign of correlation between material and product design domains. The study enriches a new path to illustrate material factors at the time of new product development. ","Establishment of Relationships between Material Design and Product Design Domains by Hybrid FEM-ANN Technique" " Currently, criminals profile (CP) is obtained from investigators or forensic psychologists interpretation, linking crime scene characteristics and an offenders behavior to his or her characteristics and psychological profile. This paper seeks an efficient and systematic discovery of nonobvious and valuable patterns between variables from a large database of solved cases via a probabilistic network (PN) modeling approach. The PN structure can be used to extract behavioral patterns and to gain insight into what factors influence these behaviors. Thus, when a new case is being investigated and the profile variables are unknown because the offender has yet to be identified, the observed crime scene variables are used to infer the unknown variables based on their connections in the structure and the corresponding numerical (probabilistic) weights. The objective is to produce a more systematic and empirical approach to profiling, and to use the resulting PN model as a decision tool. ",Modeling of Human Criminal Behavior using Probabilistic Networks " Constraint programming can definitely be seen as a model-driven paradigm. The users write programs for modeling problems. These programs are mapped to executable models to calculate the solutions. This paper focuses on efficient model management (definition and transformation). From this point of view, we propose to revisit the design of constraint-programming systems. A model-driven architecture is introduced to map solving-independent constraint models to solving-dependent decision models. 
Several important questions are examined, such as the need for a visual high-level modeling language, and the quality of metamodeling techniques to implement the transformations. A main result is the s-COMMA platform that efficiently implements the chain from modeling to solving constraint problems. ",Model-Driven Constraint Programming " An important challenge in constraint programming is to rewrite constraint models into executable programs calculating the solutions. This phase of constraint processing may require translations between constraint programming languages, transformations of constraint representations, model optimizations, and tuning of solving strategies. In this paper, we introduce a pivot metamodel describing the common features of constraint models including different kinds of constraints, statements like conditionals and loops, and other first-class elements like object classes and predicates. This metamodel is general enough to cope with the constructions of many languages, from object-oriented modeling languages to logic languages, but it is independent of them. The rewriting operations manipulate metamodel instances apart from languages. As a consequence, the rewriting operations apply whatever languages are selected and they are able to manage model semantic information. A bridge is created between the metamodel space and languages using parsing techniques. Tools from the software engineering world can be useful to implement this framework. ",Rewriting Constraint Models with Metamodels " Transforming constraint models is an important task in recent constraint programming systems. User-understandable models are defined during the modeling phase but rewriting or tuning them is mandatory to get solving-efficient models. We propose a new architecture allowing one to define bridges between any (modeling or solver) languages and to implement model optimizations. This architecture follows a model-driven approach where the constraint modeling process is seen as a set of model transformations. Among others, an interesting feature is the definition of transformations as concept-oriented rules, i.e. based on types of model elements where the types are organized into a hierarchy called a metamodel. ","Using ATL to define advanced and flexible constraint model transformations" " The methodology of Bayesian Model Averaging (BMA) is applied to the assessment of newborn brain maturity from sleep EEG. In theory this methodology provides the most accurate assessments of uncertainty in decisions. However, the existing BMA techniques have been shown to provide biased assessments in the absence of prior information enabling exploration of the model parameter space in detail within a reasonable time. The lack of detail leads to disproportional sampling from the posterior distribution. In the case of the EEG assessment of brain maturity, BMA results can be biased because of the absence of information about EEG feature importance. In this paper we explore how the posterior information about EEG features can be used in order to reduce the negative impact of disproportional sampling on BMA performance. We use EEG data recorded from sleeping newborns to test the efficiency of the proposed BMA technique. ","Feature Importance in Bayesian Assessment of Newborn Brain Maturity from EEG" " In this paper we describe an original computational model for solving different types of Distributed Constraint Satisfaction Problems (DCSP). 
The proposed model is called Controller-Agents for Constraints Solving (CACS). This model is intended to be used for solving DCSPs, an area that has emerged from the integration of two paradigms of different nature: Multi-Agent Systems (MAS) and the Constraint Satisfaction Problem (CSP) paradigm, where all constraints are treated in a central manner as a black-box. This model allows grouping constraints to form a subset that will be treated together as a local problem inside the controller. Using this model also allows handling non-binary constraints easily and directly, so that no translation of constraints into binary ones is needed. This paper presents the implementation outline of a prototype DCSP solver, its usage methodology and an overview of the CACS application to timetabling problems. ",A new model for solution of complex distributed constrained problems " Model transformations operate on models conforming to precisely defined metamodels. Consequently, it often seems relatively easy to chain them: the output of a transformation may be given as input to a second one if metamodels match. However, this simple rule has some obvious limitations. For instance, a transformation may only use a subset of a metamodel. Therefore, chaining transformations appropriately requires more information. We present here an approach that automatically discovers more detailed information about actual chaining constraints by statically analyzing transformations. The objective is to provide developers who decide to chain transformations with more data on which to base their choices. This approach has been successfully applied to the case of a library of endogenous transformations. They all have the same source and target metamodel but have some hidden chaining constraints. In such a case, the simple metamodel matching rule given above does not provide any useful information. ",Automatically Discovering Hidden Transformation Chaining Constraints " This article presents the results of the research carried out on the development of a medical diagnostic system applied to Acute Bacterial Meningitis, using the Case Based Reasoning methodology. The research was focused on the implementation of the adaptation stage, from the integration of Case Based Reasoning and Rule Based Expert Systems. In this adaptation stage we use a higher level RBC that stores and allows reutilizing change experiences, combined with a classic rule-based inference engine. In order to take into account the most evident clinical situation, a pre-diagnosis stage is implemented using a rule engine that, given an evident situation, emits the corresponding diagnosis and avoids the complete process. ","Integration of Rule Based Expert Systems and Case Based Reasoning in an Acute Bacterial Meningitis Clinical Decision Support System" " Recent advancements in web services play an important role in business-to-business and business-to-consumer interaction. The discovery mechanism is not only used to find a suitable service but also provides collaboration between service providers and consumers by using standard protocols. A static web service discovery mechanism is not only time consuming but requires continuous human interaction. This paper proposes an efficient dynamic web services discovery mechanism that can locate relevant and updated web services from service registries and repositories with timestamps, based on indexing value and categorization, for faster and more efficient discovery of services. 
The proposed prototype focuses on quality of service issues and introduces the concepts of a local cache, categorization of services, an indexing mechanism, a CSP (Constraint Satisfaction Problem) solver, aging and the usage of a translator. The performance of the proposed framework is evaluated by implementing the algorithm, and the correctness of our method is shown. The results of the proposed framework show greater performance and accuracy in the dynamic discovery mechanism of web services, resolving the existing issues of flexibility and scalability based on quality of service, and it discovers updated and most relevant services with ease of usage. ",Indexer Based Dynamic Web Services Discovery " Fuzzy Description Logics (DLs) are a family of logics which allow the representation of (and the reasoning with) structured knowledge affected by vagueness. Although most of the not very expressive crisp DLs, such as ALC, enjoy the Finite Model Property (FMP), this is not the case once we move into the fuzzy case. In this paper we show that if we allow arbitrary knowledge bases, then the fuzzy DLs ALC under Lukasiewicz and Product fuzzy logics do not verify the FMP even if we restrict to witnessed models; in other words, finite satisfiability and witnessed satisfiability are different for arbitrary knowledge bases. The aim of this paper is to point out the failure of FMP because it affects several algorithms published in the literature for reasoning under fuzzy ALC. ","On the Failure of the Finite Model Property in some Fuzzy Description Logics" " The basic aim of our study is to give a possible model for handling uncertain information. This model is worked out in the framework of DATALOG. At first the concept of fuzzy Datalog will be summarized, then its extensions for intuitionistic- and interval-valued fuzzy logic are given and the concept of bipolar fuzzy Datalog is introduced. Based on these ideas the concept of a multivalued knowledge-base will be defined as a quadruple of any background knowledge, a deduction mechanism, a connecting algorithm, and a function set of the program, which help us to determine the uncertainty levels of the results. Finally, a possible evaluation strategy is given. ",A multivalued knowledge-base model " RefereeToolbox is a Java package implementing combination operators for fusing evidence. It is downloadable from: http://refereefunction.fredericdambreville.com/releases RefereeToolbox is based on an interpretation of the fusion rules by means of Referee Functions. This approach implies a dissociation between the definition of the combination and its actual implementation, which is common to all referee-based combinations. As a result, RefereeToolbox is designed with the aim of being generic and evolutive. ",Release ZERO.0.1 of package RefereeToolbox " LEXSYS (Legume Expert System) was a project conceived at IITA (International Institute of Tropical Agriculture), Ibadan, Nigeria. It was initiated by the COMBS (Collaborative Group on Maize-Based Systems Research) in the 1990s. It was meant to be a general framework for characterizing on-farm testing for technology design for sustainable cereal-based cropping systems. LEXSYS is not a true expert system as the name would imply, but simply a user-friendly information system. This work is an attempt to give a formal representation of the existing system and then present areas where intelligent agents can be applied. 
",LEXSYS: Architecture and Implication for Intelligent Agent systems " Computing value of information (VOI) is a crucial task in various aspects of decision-making under uncertainty, such as in meta-reasoning for search; in selecting measurements to make, prior to choosing a course of action; and in managing the exploration vs. exploitation tradeoff. Since such applications typically require numerous VOI computations during a single run, it is essential that VOI be computed efficiently. We examine the issue of anytime estimation of VOI, as frequently it suffices to get a crude estimate of the VOI, thus saving considerable computational resources. As a case study, we examine VOI estimation in the measurement selection problem. Empirical evaluation of the proposed scheme in this domain shows that computational resources can indeed be significantly reduced, at little cost in expected rewards achieved in the overall decision problem. ",Rational Value of Information Estimation for Measurement Selection " Formalism based on GA is an alternative to distributed representation models developed so far --- Smolensky's tensor product, Holographic Reduced Representations (HRR) and Binary Spatter Code (BSC). Convolutions are replaced by geometric products, interpretable in terms of geometry which seems to be the most natural language for visualization of higher concepts. This paper recalls the main ideas behind the GA model and investigates recognition test results using both inner product and a clipped version of matrix representation. The influence of accidental blade equality on recognition is also studied. Finally, the efficiency of the GA model is compared to that of previously developed models. ",Geometric Algebra Model of Distributed Representations " We present in this paper some examples of how to compute by hand the PCR5 fusion rule for three sources, so the reader will better understand its mechanism. We also take into consideration the importance of sources, which is different from the classical discounting of sources. ","Importance of Sources using the Repeated Fusion Method and the Proportional Conflict Redistribution Rules #5 and #6" " Terrorism has led to many problems in Thai societies, not only property damage but also civilian casualties. Predicting terrorism activities in advance can help prepare and manage risk from sabotage by these activities. This paper proposes a framework focusing on event classification in terrorism domain using fuzzy inference systems (FISs). Each FIS is a decision-making model combining fuzzy logic and approximate reasoning. It is generated in five main parts: the input interface, the fuzzification interface, knowledge base unit, decision making unit and output defuzzification interface. Adaptive neuro-fuzzy inference system (ANFIS) is a FIS model adapted by combining the fuzzy logic and neural network. The ANFIS utilizes automatic identification of fuzzy logic rules and adjustment of membership function (MF). Moreover, neural network can directly learn from data set to construct fuzzy logic rules and MF implemented in various applications. FIS settings are evaluated based on two comparisons. The first evaluation is the comparison between unstructured and structured events using the same FIS setting. The second comparison is the model settings between FIS and ANFIS for classifying structured events. The data set consists of news articles related to terrorism events in three southern provinces of Thailand. 
The experimental results show that the classification performance of the FIS resulting from structured events achieves satisfactory accuracy and is better than that of the unstructured events. In addition, the classification of structured events using ANFIS gives higher performance than using only the FIS in the prediction of terrorism events. ",Terrorism Event Classification Using Fuzzy Inference Systems " Most web users' requirements are search or navigation time and getting correctly matched results. These constraints can be satisfied with some additional modules attached to the existing search engines and web servers. This paper proposes a powerful architecture for search engines with the title of Probabilistic Semantic Web Mining, named after the methods used. With the increase of larger and larger collections of various data resources on the World Wide Web (WWW), Web Mining has become one of the most important requirements for web users. Web servers will store various formats of data including text, image, audio, video etc., but servers cannot identify the contents of the data. These search techniques can be improved by adding some special techniques including semantic web mining and probabilistic analysis to get more accurate results. The semantic web mining technique can provide meaningful search of data resources by eliminating useless information with the mining process. In this technique web servers will maintain meta-information on each and every data resource available on that particular web server. This will help the search engine to retrieve information that is relevant to the user-given input string. This paper proposes the idea of combining these two techniques, semantic web mining and probabilistic analysis, for efficient and accurate web mining search results. SPF can be calculated by considering both semantic accuracy and syntactic accuracy of the data with respect to the input string. This will be the deciding factor for producing results. ",Probabilistic Semantic Web Mining Using Artificial Neural Analysis " The Nystrom method is an efficient technique to speed up large-scale learning applications by generating low-rank approximations. Crucial to the performance of this technique is the assumption that a matrix can be well approximated by working exclusively with a subset of its columns. In this work we relate this assumption to the concept of matrix coherence and connect matrix coherence to the performance of the Nystrom method. Making use of related work in the compressed sensing and the matrix completion literature, we derive novel coherence-based bounds for the Nystrom method in the low-rank setting. We then present empirical results that corroborate these theoretical bounds. Finally, we present more general empirical results for the full-rank setting that convincingly demonstrate the ability of matrix coherence to measure the degree to which information can be extracted from a subset of columns. ",Matrix Coherence and the Nystrom Method " We define the concept of an internal symmetry. This is a symmetry within a solution of a constraint satisfaction problem. We compare this to solution symmetry, which is a mapping between different solutions of the same problem. We argue that we may be able to exploit both types of symmetry when finding solutions. We illustrate the potential of exploiting internal symmetries on two benchmark domains: Van der Waerden numbers and graceful graphs. By identifying internal symmetries we are able to extend the state of the art in both cases. 
",Symmetry within Solutions " We study propagation algorithms for the conjunction of two AllDifferent constraints. Solutions of an AllDifferent constraint can be seen as perfect matchings on the variable/value bipartite graph. Therefore, we investigate the problem of finding simultaneous bipartite matchings. We present an extension of the famous Hall theorem which characterizes when simultaneous bipartite matchings exists. Unfortunately, finding such matchings is NP-hard in general. However, we prove a surprising result that finding a simultaneous matching on a convex bipartite graph takes just polynomial time. Based on this theoretical result, we provide the first polynomial time bound consistency algorithm for the conjunction of two AllDifferent constraints. We identify a pathological problem on which this propagator is exponentially faster compared to existing propagators. Our experiments show that this new propagator can offer significant benefits over existing methods. ",Propagating Conjunctions of AllDifferent Constraints " The successful execution of a construction project is heavily impacted by making the right decision during tendering processes. Managing tender procedures is very complex and uncertain involving coordination of many tasks and individuals with different priorities and objectives. Bias and inconsistent decision are inevitable if the decision-making process is totally depends on intuition, subjective judgement or emotion. In making transparent decision and healthy competition tendering, there exists a need for flexible guidance tool for decision support. Aim of this paper is to give a review on current practices of Decision Support Systems (DSS) technology in construction tendering processes. Current practices of general tendering processes as applied to the most countries in different regions such as United States, Europe, Middle East and Asia are comprehensively discussed. Applications of Web-based tendering processes is also summarised in terms of its properties. Besides that, a summary of Decision Support System (DSS) components is included in the next section. Furthermore, prior researches on implementation of DSS approaches in tendering processes are discussed in details. Current issues arise from both of paper-based and Web-based tendering processes are outlined. Finally, conclusion is included at the end of this paper. ",Decision Support Systems (DSS) in Construction Tendering Processes " The need for integration of ontologies with nonmonotonic rules has been gaining importance in a number of areas, such as the Semantic Web. A number of researchers addressed this problem by proposing a unified semantics for hybrid knowledge bases composed of both an ontology (expressed in a fragment of first-order logic) and nonmonotonic rules. These semantics have matured over the years, but only provide solutions for the static case when knowledge does not need to evolve. In this paper we take a first step towards addressing the dynamics of hybrid knowledge bases. We focus on knowledge updates and, considering the state of the art of belief update, ontology update and rule update, we show that current solutions are only partial and difficult to combine. Then we extend the existing work on ABox updates with rules, provide a semantics for such evolving hybrid knowledge bases and study its basic properties. To the best of our knowledge, this is the first time that an update operator is proposed for hybrid knowledge bases. 
",Towards Closed World Reasoning in Dynamic Open Worlds (Extended Version) " On the basis of an analysis of previous research, we present a generalized approach for measuring the difference of plans with an exemplary application to machine scheduling. Our work is motivated by the need for such measures, which are used in dynamic scheduling and planning situations. In this context, quantitative approaches are needed for the assessment of the robustness and stability of schedules. Obviously, any `robustness' or `stability' of plans has to be defined w. r. t. the particular situation and the requirements of the human decision maker. Besides the proposition of an instability measure, we therefore discuss possibilities of obtaining meaningful information from the decision maker for the implementation of the introduced approach. ","On the comparison of plans: Proposition of an instability measure for dynamic machine scheduling" " We define an inference system to capture explanations based on causal statements, using an ontology in the form of an IS-A hierarchy. We first introduce a simple logical language which makes it possible to express that a fact causes another fact and that a fact explains another fact. We present a set of formal inference patterns from causal statements to explanation statements. We introduce an elementary ontology which gives greater expressiveness to the system while staying close to propositional reasoning. We provide an inference system that captures the patterns discussed, firstly in a purely propositional framework, then in a datalog (limited predicate) framework. ",Ontology-based inference for causal explanation " We report (to our knowledge) the first evaluation of Constraint Satisfaction as a computational framework for solving closest string problems. We show that careful consideration of symbol occurrences can provide search heuristics that provide several orders of magnitude speedup at and above the optimal distance. We also report (to our knowledge) the first analysis and evaluation -- using any technique -- of the computational difficulties involved in the identification of all closest strings for a given input set. We describe algorithms for web-scale distributed solution of closest string problems, both purely based on AI backtrack search and also hybrid numeric-AI methods. ",The Exact Closest String Problem as a Constraint Satisfaction Problem " We consider the problem of jointly training structured models for extraction from sources whose instances enjoy partial overlap. This has important applications like user-driven ad-hoc information extraction on the web. Such applications present new challenges in terms of the number of sources and their arbitrary pattern of overlap not seen by earlier collective training schemes applied on two sources. We present an agreement-based learning framework and alternatives within it to trade-off tractability, robustness to noise, and extent of agreement. We provide a principled scheme to discover low-noise agreement sets in unlabeled data across the sources. Through extensive experiments over 58 real datasets, we establish that our method of additively rewarding agreement over maximal segments of text provides the best trade-offs, and also scores over alternatives such as collective inference, staged training, and multi-view learning. ",Joint Structured Models for Extraction from Overlapping Sources " A technique to study the dynamics of solving of a research task is suggested. 
The research task was based on specially developed software, Right-Wrong Responder (RWR), with the participants having to reveal the response logic of the program. The participants interacted with the program in the form of a semi-binary dialogue, which implies feedback responses of only two kinds - ""right"" or ""wrong"". The technique has been applied to a small pilot group of volunteer participants. Some of them have successfully solved the task (solvers) and some have not (non-solvers). At the beginning of the work, the solvers made more wrong moves than the non-solvers, and they made fewer wrong moves closer to the end of the work. A phase portrait of the work in both solvers and non-solvers showed definite cycles that may correspond to sequences of partially true hypotheses that may be formulated by the participants during the solving of the task. ","An approach to visualize the course of solving of a research task in humans" " This paper constructively proves the existence of an effective procedure generating a computable (total) function that is not contained in any given effectively enumerable set of such functions. The proof implies the existence of machines that process informal concepts such as computable (total) functions beyond the limits of any given Turing machine or formal system, that is, these machines can, in a certain sense, ""compute"" function values beyond these limits. We call these machines creative. We argue that any ""intelligent"" machine should be capable of processing informal concepts such as computable (total) functions, that is, it should be creative. Finally, we introduce hypotheses on creative machines which were developed on the basis of theoretical investigations and experiments with computer programs. The hypotheses say that machine intelligence is the execution of a self-developing procedure starting from any universal programming language and any input. ",Informal Concepts in Machines " Mountain river torrents and snow avalanches cause human and material damage with dramatic consequences. Knowledge about natural phenomena is often lacking and expertise is required for decision and risk management purposes using multi-disciplinary quantitative or qualitative approaches. Expertise is considered as a decision process based on imperfect information coming from more or less reliable and conflicting sources. A methodology mixing the Analytic Hierarchy Process (AHP), a multi-criteria decision-aid method, and information fusion using Belief Function Theory is described. Fuzzy Set and Possibility theories make it possible to transform quantitative and qualitative criteria into a common frame of discernment for decision in Dempster-Shafer Theory (DST) and Dezert-Smarandache Theory (DSmT) contexts. The main issues consist in basic belief assignment elicitation, conflict identification and management, fusion rule choice and result validation, but also in the specific need to distinguish between importance, reliability and uncertainty in the fusion process. ","A two-step fusion process for multi-criteria decision applied to natural hazards in mountains" " A lot of mathematical knowledge has been formalized and stored in repositories by now: different mathematical theorems and theories have been taken into consideration and included in mathematical repositories. Applications more distant from pure mathematics, however --- though based on these theories --- often need more detailed knowledge about the underlying theories. 
In this paper we present an example Mizar formalization from the area of electrical engineering focusing on stability theory, which is based on complex analysis. We discuss what kind of special knowledge is necessary here and how much of this knowledge is included in existing repositories. ",On Building a Knowledge Base for Stability Theory " It is hypothesized that creativity arises from the self-mending capacity of an internal model of the world, or worldview. The uniquely honed worldview of a creative individual results in a distinctive style that is recognizable within and across domains. It is further hypothesized that creativity is domain-general in the sense that there exist multiple avenues by which the distinctiveness of one's worldview can be expressed. These hypotheses were tested using art students and creative writing students. Art students guessed significantly above chance both which painting was done by which of five famous artists, and which artwork was done by which of their peers. Similarly, creative writing students guessed significantly above chance both which passage was written by which of five famous writers, and which passage was written by which of their peers. These findings support the hypothesis that creative style is recognizable. Moreover, creative writing students guessed significantly above chance which of their peers produced particular works of art, supporting the hypothesis that creative style is recognizable not just within but across domains. ","Recognizability of Individual Creative Style Within and Across Domains: Preliminary Studies" " Approximate dynamic programming has been used successfully in a large variety of domains, but it relies on a small set of provided approximation features to calculate solutions reliably. Large and rich sets of features can cause existing algorithms to overfit because of a limited number of samples. We address this shortcoming using $L_1$ regularization in approximate linear programming. Because the proposed method can automatically select the appropriate richness of features, its performance does not degrade with an increasing number of features. These results rely on new and stronger sampling bounds for regularized approximate linear programs. We also propose a computationally efficient homotopy method. The empirical evaluation of the approach shows that the proposed method performs well on simple MDPs and standard benchmark problems. ","Feature Selection Using Regularization in Approximate Linear Programs for Markov Decision Processes" " We discuss how to use a Genetic Regulatory Network as an evolutionary representation to solve a typical GP reinforcement problem, pole balancing. The network is a modified version of an Artificial Regulatory Network proposed a few years ago, and the task could be solved only by finding a proper way of connecting inputs and outputs to the network. We show that the representation is able to generalize well over the problem domain, and discuss the performance of different models of this kind. ",Evolving Genes to Balance a Pole " Programs to solve so-called constraint problems are complex pieces of software which require many design decisions to be made more or less arbitrarily by the implementer. These decisions affect the performance of the finished solver significantly. Once a design decision has been made, it cannot easily be reversed, although a different decision may be more appropriate for a particular problem. 
We investigate using machine learning to make these decisions automatically depending on the problem to solve, with the alldifferent constraint as an example. Our system is capable of making non-trivial, multi-level decisions that improve over always making a default choice. ","Using machine learning to make constraint solver implementation decisions" " In this paper the author presents a kind of Soft Computing Technique, mainly an application of the fuzzy set theory of Prof. Zadeh [16], to a problem in Medical Expert Systems. The chosen problem is the design of a physician's decision model which can take crisp as well as fuzzy data as input, unlike the traditional models. The author presents a mathematical model based on fuzzy set theory for physician-aided evaluation of a complete representation of information emanating from the initial interview, including patient past history, present symptoms, signs observed upon physical examination, and results of clinical and diagnostic tests. ",A Soft Computing Model for Physicians' Decision Process " An important problem in computational social choice theory is the complexity of undesirable behavior among agents, such as control, manipulation, and bribery in election systems. These kinds of voting strategies are often tempting at the individual level but disastrous for the agents as a whole. Creating election systems where the determination of such strategies is difficult is thus an important goal. An interesting set of elections is that of scoring protocols. Previous work in this area has demonstrated the complexity of misuse in cases involving a fixed number of candidates, and of specific election systems on an unbounded number of candidates, such as Borda. In contrast, we take the first step in generalizing the results of computational complexity of election misuse to cases of infinitely many scoring protocols on an unbounded number of candidates. Interesting families of systems include $k$-approval and $k$-veto elections, in which voters distinguish $k$ candidates from the candidate set. Our main result is to partition the problems of these families based on their complexity. We do so by showing they are polynomial-time computable, NP-hard, or polynomial-time equivalent to another problem of interest. We also demonstrate a surprising connection between manipulation in election systems and some graph theory problems. ",The Complexity of Manipulating $k$-Approval Elections " In the last two decades, a number of methods have been proposed for forecasting based on fuzzy time series. Most of the fuzzy time series methods are presented for forecasting car road accidents. However, the forecasting accuracy rates of the existing methods are not good enough. In this paper, we compare our proposed new method of fuzzy time series forecasting with existing methods. Our method is based on means-based partitioning of the historical data of car road accidents. The proposed method belongs to the class of kth-order, time-variant methods. The proposed method obtains a better forecasting accuracy rate for car road accidents than the existing methods. 
","Inaccuracy Minimization by Partioning Fuzzy Data Sets - Validation of Analystical Methodology" " In this paper, a new learning algorithm for adaptive network intrusion detection using naive Bayesian classifier and decision tree is presented, which performs balance detections and keeps false positives at acceptable level for different types of network attacks, and eliminates redundant attributes as well as contradictory examples from training data that make the detection model complex. The proposed algorithm also addresses some difficulties of data mining such as handling continuous attribute, dealing with missing attribute values, and reducing noise in training data. Due to the large volumes of security audit data as well as the complex and dynamic properties of intrusion behaviours, several data miningbased intrusion detection techniques have been applied to network-based traffic data and host-based data in the last decades. However, there remain various issues needed to be examined towards current intrusion detection systems (IDS). We tested the performance of our proposed algorithm with existing learning algorithms by employing on the KDD99 benchmark intrusion detection dataset. The experimental results prove that the proposed algorithm achieved high detection rates (DR) and significant reduce false positives (FP) for different types of network intrusions using limited computational resources. ",Combining Naive Bayes and Decision Tree for Adaptive Intrusion Detection " This paper presents a combination of several automated reasoning and proof presentation tools with the Mizar system for formalization of mathematics. The combination forms an online service called MizAR, similar to the SystemOnTPTP service for first-order automated reasoning. The main differences to SystemOnTPTP are the use of the Mizar language that is oriented towards human mathematicians (rather than the pure first-order logic used in SystemOnTPTP), and setting the service in the context of the large Mizar Mathematical Library of previous theorems,definitions, and proofs (rather than the isolated problems that are solved in SystemOnTPTP). These differences poses new challenges and new opportunities for automated reasoning and for proof presentation tools. This paper describes the overall structure of MizAR, and presents the automated reasoning systems and proof presentation tools that are combined to make MizAR a useful mathematical service. ","Automated Reasoning and Presentation Support for Formalizing Mathematics in Mizar" " Structured and semi-structured data describing entities, taxonomies and ontologies appears in many domains. There is a huge interest in integrating structured information from multiple sources; however integrating structured data to infer complex common structures is a difficult task because the integration must aggregate similar structures while avoiding structural inconsistencies that may appear when the data is combined. In this work, we study the integration of structured social metadata: shallow personal hierarchies specified by many individual users on the SocialWeb, and focus on inferring a collection of integrated, consistent taxonomies. We frame this task as an optimization problem with structural constraints. We propose a new inference algorithm, which we refer to as Relational Affinity Propagation (RAP) that extends affinity propagation (Frey and Dueck 2007) by introducing structural constraints. 
We validate the approach on a real-world social media dataset, collected from the photosharing website Flickr. Our empirical results show that our proposed approach is able to construct deeper and denser structures compared to an approach using only the standard affinity propagation algorithm. ",Integrating Structured Metadata with Relational Affinity Propagation " The paper offers a mathematical formalization of the Turing test. This formalization makes it possible to establish the conditions under which some Turing machine will pass the Turing test and the conditions under which every Turing machine (or every Turing machine of the special class) will fail the Turing test. ",A Formalization of the Turing Test " Many social Web sites allow users to annotate the content with descriptive metadata, such as tags, and more recently to organize content hierarchically. These types of structured metadata provide valuable evidence for learning how a community organizes knowledge. For instance, we can aggregate many personal hierarchies into a common taxonomy, also known as a folksonomy, that will aid users in visualizing and browsing social content, and also to help them in organizing their own content. However, learning from social metadata presents several challenges, since it is sparse, shallow, ambiguous, noisy, and inconsistent. We describe an approach to folksonomy learning based on relational clustering, which exploits structured metadata contained in personal hierarchies. Our approach clusters similar hierarchies using their structure and tag statistics, then incrementally weaves them into a deeper, bushier tree. We study folksonomy learning using social metadata extracted from the photo-sharing site Flickr, and demonstrate that the proposed approach addresses the challenges. Moreover, comparing to previous work, the approach produces larger, more accurate folksonomies, and in addition, scales better. ","Growing a Tree in the Forest: Constructing Folksonomies by Integrating Structured Metadata" " Symmetry is an important feature of many constraint programs. We show that any problem symmetry acting on a set of symmetry breaking constraints can be used to break symmetry. Different symmetries pick out different solutions in each symmetry class. This simple but powerful idea can be used in a number of different ways. We describe one application within model restarts, a search technique designed to reduce the conflict between symmetry breaking and the branching heuristic. In model restarts, we restart search periodically with a random symmetry of the symmetry breaking constraints. Experimental results show that this symmetry breaking technique is effective in practice on some standard benchmark problems. ",Symmetries of Symmetry Breaking Constraints " We propose automatically learning probabilistic Hierarchical Task Networks (pHTNs) in order to capture a user's preferences on plans, by observing only the user's behavior. HTNs are a common choice of representation for a variety of purposes in planning, including work on learning in planning. Our contributions are (a) learning structure and (b) representing preferences. In contrast, prior work employing HTNs considers learning method preconditions (instead of structure) and representing domain physics or search control knowledge (rather than preferences). 
Initially we will assume that the observed distribution of plans is an accurate representation of user preference, and then generalize to the situation where feasibility constraints frequently prevent the execution of preferred plans. In order to learn a distribution on plans we adapt an Expectation-Maximization (EM) technique from the discipline of (probabilistic) grammar induction, taking the perspective of task reductions as productions in a context-free grammar over primitive actions. To account for the difference between the distributions of possible and preferred plans we subsequently modify this core EM technique, in short, by rescaling its input. ","Learning Probabilistic Hierarchical Task Networks to Capture User Preferences" " Brain-Like Stochastic Search (BLiSS) refers to this task: given a family of utility functions U(u,A), where u is a vector of parameters or task descriptors, maximize or minimize U with respect to u, using networks (Option Nets) which input A and learn to generate good options u stochastically. This paper discusses why this is crucial to brain-like intelligence (an area funded by NSF) and to many applications, and discusses various possibilities for network design and training. The appendix discusses recent research, relations to work on stochastic optimization in operations research, and relations to engineering-based approaches to understanding neocortex. ","Brain-Like Stochastic Search: A Research Challenge and Funding Opportunity" " We introduce a framework for representing a variety of interesting problems as inference over the execution of probabilistic model programs. We represent a ""solution"" to such a problem as a guide program which runs alongside the model program and influences the model program's random choices, leading the model program to sample from a different distribution than from its priors. Ideally the guide program influences the model program to sample from the posteriors given the evidence. We show how the KL- divergence between the true posterior distribution and the distribution induced by the guided model program can be efficiently estimated (up to an additive constant) by sampling multiple executions of the guided model program. In addition, we show how to use the guide program as a proposal distribution in importance sampling to statistically prove lower bounds on the probability of the evidence and on the probability of a hypothesis and the evidence. We can use the quotient of these two bounds as an estimate of the conditional probability of the hypothesis given the evidence. We thus turn the inference problem into a heuristic search for better guide programs. ",Variational Program Inference " The basic unit of meaning on the Semantic Web is the RDF statement, or triple, which combines a distinct subject, predicate and object to make a definite assertion about the world. A set of triples constitutes a graph, to which they give a collective meaning. It is upon this simple foundation that the rich, complex knowledge structures of the Semantic Web are built. Yet the very expressiveness of RDF, by inviting comparison with real-world knowledge, highlights a fundamental shortcoming, in that RDF is limited to statements of absolute fact, independent of the context in which a statement is asserted. This is in stark contrast with the thoroughly context-sensitive nature of human thought. 
The model presented here provides a particularly simple means of contextualizing an RDF triple by associating it with related statements in the same graph. This approach, in combination with a notion of graph similarity, is sufficient to select only those statements from an RDF graph which are subjectively most relevant to the context of the requesting process. ",The Dilated Triple " In this information system age, many organizations consider the information system as their weapon to compete, to gain competitive advantage, or to give the best services in the case of non-profit organizations. The Game Information System, which combines an Information System with a game, is a breakthrough for achieving organizational performance. The Game Information System runs the Information System with a game and shows how a game can be implemented to run the Information System. A game is not only for fun and entertainment; the challenge is to combine fun and entertainment with the Information System, that is, to run the information system with entertainment and to deliver the entertainment with the information system all at once. A Game Information System can be implemented in as many sectors as the information system itself, but from a different point of view: a view of the game in which people can be joyful and happy and carry out their transactions as fun things. ",Game Information System " In order to gain strategic positioning for competition in a business organization, the information system must be ahead in this information age, where information is one of the weapons to win the competition and, in the right hands, information becomes the right bullet. An information system supported by information technology is not enough if it is only on the internet or implemented with internet technology. The growth of information technology as a tool for helping people and making things easy for them to use must be accompanied by the desire to make them have fun and be happy when they come into contact with the information technology itself. Basically, humans like to play; since childhood humans have been playing, free and happy, and when humans grow up they cannot play as much as they did in their childhood. We have to develop an information system which does not merely perform as an information system but can help humans explore their natural instinct for playing, bringing fun and happiness when they interact with the information system. ",Virtual information system on working area " An earthquake DSS is an information technology environment which can be used by government to make earthquake mitigation decisions sharper, faster and better. An earthquake DSS can be delivered as E-government, which is not only for the government itself but also serves to guarantee each citizen's rights to education, training and information about earthquakes and how to overcome them. Knowledge can be managed for future use and made available for mining by saving and maintaining all the data and information about earthquakes and earthquake mitigation in Indonesia. Using Web technology will enhance global access and ease of use. A data warehouse, as an unnormalized database for multidimensional analysis, will speed up the query process and increase the variety of reports. Linking with other Disaster DSSs in one national disaster DSS, and linking with other government and international information systems, will enhance the knowledge and sharpen the reports. 
",Indonesian Earthquake Decision Support System " Markov decision processes (MDPs) are widely used for modeling decision-making problems in robotics, automated control, and economics. Traditional MDPs assume that the decision maker (DM) knows all states and actions. However, this may not be true in many situations of interest. We define a new framework, MDPs with unawareness (MDPUs) to deal with the possibilities that a DM may not be aware of all possible actions. We provide a complete characterization of when a DM can learn to play near-optimally in an MDPU, and give an algorithm that learns to play near-optimally when it is possible to do so, as efficiently as possible. In particular, we characterize when a near-optimal solution can be found in polynomial time. ",MDPs with Unawareness " Existing value function approximation methods have been successfully used in many applications, but they often lack useful a priori error bounds. We propose a new approximate bilinear programming formulation of value function approximation, which employs global optimization. The formulation provides strong a priori guarantees on both robust and expected policy loss by minimizing specific norms of the Bellman residual. Solving a bilinear program optimally is NP-hard, but this is unavoidable because the Bellman-residual minimization itself is NP-hard. We describe and analyze both optimal and approximate algorithms for solving bilinear programs. The analysis shows that this algorithm offers a convergent generalization of approximate policy iteration. We also briefly analyze the behavior of bilinear programming algorithms under incomplete samples. Finally, we demonstrate that the proposed approach can consistently minimize the Bellman residual on simple benchmark problems. ",Global Optimization for Value Function Approximation " Different notions of equivalence, such as the prominent notions of strong and uniform equivalence, have been studied in Answer-Set Programming, mainly for the purpose of identifying programs that can serve as substitutes without altering the semantics, for instance in program optimization. Such semantic comparisons are usually characterized by various selections of models in the logic of Here-and-There (HT). For uniform equivalence however, correct characterizations in terms of HT-models can only be obtained for finite theories, respectively programs. In this article, we show that a selection of countermodels in HT captures uniform equivalence also for infinite theories. This result is turned into coherent characterizations of the different notions of equivalence by countermodels, as well as by a mixture of HT-models and countermodels (so-called equivalence interpretations). Moreover, we generalize the so-called notion of relativized hyperequivalence for programs to propositional theories, and apply the same methodology in order to obtain a semantic characterization which is amenable to infinite settings. This allows for a lifting of the results to first-order theories under a very general semantics given in terms of a quantified version of HT. We thus obtain a general framework for the study of various notions of equivalence for theories under answer-set semantics. Moreover, we prove an expedient property that allows for a simplified treatment of extended signatures, and provide further results for non-ground logic programs. 
In particular, uniform equivalence coincides under open and ordinary answer-set semantics, and for finite non-ground programs under these semantics, also the usual characterization of uniform equivalence in terms of maximal and total HT-models of the grounding is correct, even for infinite domains, when corresponding ground programs are infinite. ","A General Framework for Equivalences in Answer-Set Programming by Countermodels in the Logic of Here-and-There" " Human disease diagnosis is a complicated process and requires a high level of expertise. Any attempt to develop a web-based expert system dealing with human disease diagnosis has to overcome various difficulties. This paper describes project work aiming to develop a web-based fuzzy expert system for diagnosing human diseases. Nowadays, fuzzy systems are being used successfully in an increasing number of application areas; they use linguistic rules to describe systems. This research project focuses on the research and development of a web-based clinical tool designed to improve the quality of the exchange of health information between health care professionals and patients. Practitioners can also use this web-based tool to corroborate diagnoses. The proposed system is tested on various scenarios in order to evaluate its performance. In all cases, the proposed system exhibits satisfactory results. ",Human Disease Diagnosis Using a Fuzzy Expert System " This work lies in the area of computer science focused on creating machines that can engage in behaviors that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and 50 years of research into various programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems which can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible. The human ability to estimate information is shown most clearly in the use of natural languages. When using words of a natural language to evaluate qualitative attributes, for example, a person embeds uncertainty, in the form of vagueness, in his or her estimations. Vague sets, vague judgments and vague conclusions take place there and then, where and when a reasoning subject exists and is interested in something. The theory of vague sets has arisen as an answer to the vagueness of the language the reasoning subject speaks. The language of a reasoning subject is generated by vague events which are created by reason and operated by the mind. The theory of vague sets represents an attempt to find an approximation of vague grouping which would be more convenient than the classical theory of sets in situations where natural language plays a significant role. Such a theory has been offered by the well-known American mathematicians Gau and Buehrer. In our paper we describe how the vagueness of linguistic variables can be handled by using vague set theory. This paper is mainly designed for one of the directions of eventology (the theory of random vague events), which has arisen within the limits of probability theory and which pursues the unique purpose of describing eventologically a movement of reason. ",Vagueness of Linguistic variable " Ontologies usually suffer from semantic heterogeneity when simultaneously used in information sharing, merging, integrating and querying processes. 
Therefore, identifying the similarity between the ontologies being used becomes a mandatory task for all these processes in order to handle the problem of semantic heterogeneity. In this paper, we propose an efficient technique for similarity measurement between two ontologies. The proposed technique identifies all candidate pairs of similar concepts without omitting any similar pair. The proposed technique can be used in different types of operations on ontologies such as merging, mapping and aligning. By analyzing its results, a reasonable improvement in terms of completeness, correctness and overall quality has been found. ",An Efficient Technique for Similarity Identification between Ontologies " Many formal languages have been proposed to express or represent Ontologies, including RDF, RDFS, DAML+OIL and OWL. Most of these languages are based on XML syntax, but with various terminologies and expressiveness. Therefore, choosing a language for building an Ontology is a key step. The choice of a language to represent an Ontology is based mainly on what the Ontology will represent or be used for. That language should have a range of quality support features such as ease of use, expressive power, compatibility, sharing and versioning, and internationalisation. This is because different kinds of knowledge-based applications need different language features. The main objective of these languages is to add semantics to the existing information on the web. The aim of this paper is to provide a good knowledge of the existing languages, an understanding of these languages, and how they can be used. ",The State of the Art: Ontology Web-Based Languages: XML Based " The Semantic Web is actually an extension of the current Web in that it represents information more meaningfully for humans and computers alike. It enables the description of contents and services in machine-readable form, and enables annotating, discovering, publishing, advertising and composing services to be automated. It was developed based on Ontology, which is considered the backbone of the Semantic Web. In other words, the current Web is transformed from being machine-readable to machine-understandable. In fact, Ontology is a key technique with which to annotate semantics and provide a common, comprehensible foundation for resources on the Semantic Web. Moreover, Ontology can provide a common vocabulary, a grammar for publishing data, and can supply a semantic description of data which can be used to preserve the Ontologies and keep them ready for inference. This paper provides basic concepts of web services and the Semantic Web, defines the structure and the main applications of ontology, and explains many relevant terms in order to provide a basic understanding of ontologies. ",Understanding Semantic Web and Ontologies: Theory and Applications " Finding the structure of a graphical model has received much attention in many fields. Recently, it has been reported that the non-Gaussianity of data enables us to identify the structure of a directed acyclic graph without any prior knowledge on the structure. In this paper, we propose a novel non-Gaussianity based algorithm for a more general type of model: chain graphs. The algorithm finds an ordering of the disjoint subsets of variables by iteratively evaluating the independence between the variable subset and the residuals when the remaining variables are regressed on those. However, its computational cost grows exponentially with the number of variables. 
Therefore, we further discuss an efficient approximate approach for applying the algorithm to large-sized graphs. We illustrate the algorithm with artificial and real-world datasets. ",GroupLiNGAM: Linear non-Gaussian acyclic models for sets of variables " Notions of core, support and inversion of a soft set have been defined and studied. Soft approximations are soft sets developed through core and support, and are used for granulating the soft space. The membership structure of a soft set has been probed, and many interesting properties are presented. The mathematical apparatus developed so far in this paper yields a detailed analysis of two works viz. [N. Cagman, S. Enginoglu, Soft set theory and uni-int decision making, European Jr. of Operational Research (article in press, available online 12 May 2010)] and [N. Cagman, S. Enginoglu, Soft matrix theory and its decision making, Computers and Mathematics with Applications 59 (2010) 3308 - 3314.]. We prove (Theorem 8.1) that the uni-int method of Cagman is equivalent to a core-support expression which is computationally far less expensive than uni-int. This also highlights some shortcomings in Cagman's uni-int method and thus motivates us to improve the method. We first suggest an improvement to the uni-int method and then present a new conjecture to solve the optimum choice problem given by Cagman and Enginoglu. Our Example 8.6 presents a case where the optimum choice is intuitively clear yet both uni-int methods (Cagman's and our improved one) give the wrong answer but the new conjecture solves the problem correctly. ",Soft Approximations and uni-int Decision Making " In recent years there has been growing interest in solutions for the delivery of clinical care for the elderly, due to the large increase in the aging population. Monitoring a patient in his home environment is necessary to ensure continuity of care in home settings, but, to be useful, this activity must not be too invasive for patients and a burden for caregivers. We prototyped a system called SINDI (Secure and INDependent lIving), focused on i) collecting a limited amount of data about the person and the environment through Wireless Sensor Networks (WSN), and ii) inferring from these data enough information to support caregivers in understanding patients' well-being and in predicting possible evolutions of their health. Our hierarchical logic-based model of health combines data from different sources, sensor data, test results, common-sense knowledge and the patient's clinical profile at the lower level, and correlation rules between health conditions across upper levels. The logical formalization and the reasoning process are based on Answer Set Programming. The expressive power of this logic programming paradigm makes it possible to reason about health evolution even when the available information is incomplete and potentially incoherent, while declarativity simplifies rule specification by caregivers and allows automatic encoding of knowledge. This paper describes how these issues have been targeted in the application scenario of the SINDI system. ","Reasoning Support for Risk Prediction and Prevention in Independent Living" " Iris recognition technology, used to identify individuals by photographing the iris of their eye, has become popular in security applications because of its ease of use, accuracy, and safety in controlling access to high-security areas. Fusion of multiple algorithms for biometric verification performance improvement has received considerable attention. 
The proposed method combines zero-crossing 1D wavelet, Euler number, and genetic algorithm based techniques for feature extraction. The output from these three algorithms is normalized and their scores are fused to decide whether the user is genuine or an imposter. This new strategy is discussed in this paper in order to compute a multimodal combined score. ",Improving Iris Recognition Accuracy By Score Based Fusion Method " We study decompositions of the global NVALUE constraint. Our main contribution is theoretical: we show that there are propagators for global constraints like NVALUE which decomposition can simulate with the same time complexity but with a much greater space complexity. This suggests that the benefit of a global propagator may often not be in saving time but in saving space. Our other theoretical contribution is to show for the first time that range consistency can be enforced on NVALUE with the same worst-case time complexity as bound consistency. Finally, the decompositions we study are readily encoded as linear inequalities. We are therefore able to use them in integer linear programs. ",Decomposition of the NVALUE constraint " Symmetry can be used to help solve many problems. For instance, Einstein's famous 1905 paper (""On the Electrodynamics of Moving Bodies"") uses symmetry to help derive the laws of special relativity. In artificial intelligence, symmetry has played an important role in both problem representation and reasoning. I describe recent work on using symmetry to help solve constraint satisfaction problems. Symmetries occur within individual solutions of problems as well as between different solutions of the same problem. Symmetry can also be applied to the constraints in a problem to give new symmetric constraints. Reasoning about symmetry can speed up problem solving, and has led to the discovery of new results in both graph and number theory. ",Symmetry within and between solutions " We propose an online form of the cake cutting problem. This models situations where players arrive and depart during the process of dividing a resource. We show that well-known fair division procedures like cut-and-choose and the Dubins-Spanier moving knife procedure can be adapted to apply to such online problems. We propose some desirable properties that online cake cutting procedures might possess like online forms of proportionality and envy-freeness, and identify which properties are in fact possessed by the different online cake procedures. ",Online Cake Cutting " The stable marriage problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools, or more generally to any two-sided market. We consider a useful variation of the stable marriage problem, where the men and women express their preferences using a preference list with ties over a subset of the members of the other sex. Matchings are permitted only with people who appear in these preference lists. In this setting, we study the problem of finding a stable matching that marries as many people as possible. Stability is an envy-free notion: no man and woman who are not married to each other would both prefer each other to their partners or to being single. This problem is NP-hard. We tackle this problem using local search, exploiting properties of the problem to reduce the size of the neighborhood and to make local moves efficiently. 
Experimental results show that this approach is able to solve large problems, quickly returning stable matchings of large and often optimal size. ",Local search for stable marriage problems with ties and incomplete lists " Frequent Episode Discovery framework is a popular framework in Temporal Data Mining with many applications. Over the years many different notions of frequencies of episodes have been proposed along with different algorithms for episode discovery. In this paper we present a unified view of all such frequency counting algorithms. We present a generic algorithm such that all current algorithms are special cases of it. This unified view allows one to gain insights into different frequencies and we present quantitative relationships among different frequencies. Our unified view also helps in obtaining correctness proofs for various algorithms as we show here. We also point out how this unified view helps us to consider generalization of the algorithm so that they can discover episodes with general partial orders. ","A unified view of Automata-based algorithms for Frequent Episode Discovery" " The stable marriage (SM) problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools, or more generally to any two-sided market. In the classical formulation, n men and n women express their preferences (via a strict total order) over the members of the other sex. Solving a SM problem means finding a stable marriage where stability is an envy-free notion: no man and woman who are not married to each other would both prefer each other to their partners or to being single. We consider both the classical stable marriage problem and one of its useful variations (denoted SMTI) where the men and women express their preferences in the form of an incomplete preference list with ties over a subset of the members of the other sex. Matchings are permitted only with people who appear in these lists, an we try to find a stable matching that marries as many people as possible. Whilst the SM problem is polynomial to solve, the SMTI problem is NP-hard. We propose to tackle both problems via a local search approach, which exploits properties of the problems to reduce the size of the neighborhood and to make local moves efficiently. We evaluate empirically our algorithm for SM problems by measuring its runtime behaviour and its ability to sample the lattice of all possible stable marriages. We evaluate our algorithm for SMTI problems in terms of both its runtime behaviour and its ability to find a maximum cardinality stable marriage.For SM problems, the number of steps of our algorithm grows only as O(nlog(n)), and that it samples very well the set of all stable marriages. It is thus a fair and efficient approach to generate stable marriages.Furthermore, our approach for SMTI problems is able to solve large problems, quickly returning stable matchings of large and often optimal size despite the NP-hardness of this problem. ",Local search for stable marriage problems " From the advent of the application of satellite imagery to land cover mapping, one of the growing areas of research interest has been in the area of image classification. Image classifiers are algorithms used to extract land cover information from satellite imagery. Most of the initial research has focussed on the development and application of algorithms to better existing and emerging classifiers. 
In this paper, a paradigm shift is proposed whereby a committee of classifiers is used to determine the final classification output. Two of the key components of an ensemble system are that there should be diversity among the classifiers and that there should be a mechanism through which the results are combined. In this paper, the members of the ensemble system include: Linear SVM, Gaussian SVM and Quadratic SVM. The final output was determined through a simple majority vote of the individual classifiers. From the results obtained it was observed that the final derived map generated by an ensemble system can potentially improve on the results derived from the individual classifiers making up the ensemble system. The ensemble system classification accuracy was, in this case, better than the linear and quadratic SVM result. It was however less than that of the RBF SVM. Areas for further research could focus on improving the diversity of the ensemble system used in this research. ",An svm multiclassifier approach to land cover mapping " A general method is given for revising degrees of belief and arriving at consistent decisions about a system of logically constrained issues. In contrast to other works about belief revision, here the constraints are assumed to be fixed. The method has two variants, dual of each other, whose revised degrees of belief are respectively above and below the original ones. The upper [resp. lower] revised degrees of belief are uniquely characterized as the lowest [resp. highest] ones that are invariant by a certain max-min [resp. min-max] operation determined by the logical constraints. In both variants, making balance between the revised degree of belief of a proposition and that of its negation leads to decisions that are ensured to be consistent with the logical constraints. These decisions are ensured to agree with the majority criterion as applied to the original degrees of belief whenever this gives a consistent result. They are also also ensured to satisfy a property of respect for unanimity about any particular issue, as well as a property of monotonicity with respect to the original degrees of belief. The application of the method to certain special domains comes down to well established or increasingly accepted methods, such as the single-link method of cluster analysis and the method of paths in preferential voting. ",A general method for deciding about logically constrained issues " Strategic Environmental Assessment is a procedure aimed at introducing systematic assessment of the environmental effects of plans and programs. This procedure is based on the so-called coaxial matrices that define dependencies between plan activities (infrastructures, plants, resource extractions, buildings, etc.) and positive and negative environmental impacts, and dependencies between these impacts and environmental receptors. Up to now, this procedure is manually implemented by environmental experts for checking the environmental effects of a given plan or program, but it is never applied during the plan/program construction. A decision support system, based on a clear logic semantics, would be an invaluable tool not only in assessing a single, already defined plan, but also during the planning process in order to produce an optimized, environmentally assessed plan and to study possible alternative scenarios. 
We propose two logic-based approaches to the problem, one based on Constraint Logic Programming and one on Probabilistic Logic Programming, which could, in the future, be conveniently merged to exploit the advantages of both. We test the proposed approaches on a real energy plan and we discuss their limitations and advantages. ",Logic-Based Decision Support for Strategic Environmental Assessment " Hybrid MKNF knowledge bases are one of the most prominent tightly integrated combinations of open-world ontology languages with closed-world (non-monotonic) rule paradigms. The definition of Hybrid MKNF is parametric on the description logic (DL) underlying the ontology language, in the sense that non-monotonic rules can extend any decidable DL language. Two related semantics have been defined for Hybrid MKNF: one that is based on the Stable Model Semantics for logic programs and one on the Well-Founded Semantics (WFS). Under WFS, the definition of Hybrid MKNF relies on a bottom-up computation that has polynomial data complexity whenever the DL language is tractable. Here we define a general query-driven procedure for Hybrid MKNF that is sound with respect to the stable model-based semantics, and sound and complete with respect to its WFS variant. This procedure is able to answer a slightly restricted form of conjunctive queries, and is based on tabled rule evaluation extended with an external oracle that captures reasoning within the ontology. Such an (abstract) oracle receives as input a query along with knowledge already derived, and replies with a (possibly empty) set of atoms, defined in the rules, whose truth would suffice to prove the initial query. With appropriate assumptions on the complexity of the abstract oracle, the general procedure maintains the data complexity of the WFS for Hybrid MKNF knowledge bases. To illustrate this approach, we provide a concrete oracle for EL+, a fragment of the light-weight DL EL++. Such an oracle has practical use, as EL++ is the language underlying OWL 2 EL, which is part of the W3C recommendations for the Semantic Web, and is tractable for reasoning tasks such as subsumption. We show that query-driven Hybrid MKNF preserves polynomial data complexity when using the EL+ oracle and WFS. ",Query-driven Procedures for Hybrid MKNF Knowledge Bases " Answer set programming - the most popular problem solving paradigm based on logic programs - has been recently extended to support uninterpreted function symbols. All of these approaches have some limitations. In this paper we propose a class of programs called FP2 that enjoys a different trade-off between expressiveness and complexity. FP2 programs enjoy the following unique combination of properties: (i) the ability to express predicates with infinite extensions; (ii) full support for predicates with arbitrary arity; (iii) decidability of FP2 membership checking; (iv) decidability of skeptical and credulous stable model reasoning for call-safe queries. Odd cycles are supported by composing FP2 programs with argument restricted programs. ",A decidable subclass of finitary programs " This paper models a decision support system to predict the occurrence of a suicide attack in a given collection of cities. The system comprises two parts. The first part analyzes and identifies the factors which affect the prediction. Admitting incomplete information and the use of linguistic terms by experts as two characteristic features of this peculiar prediction problem, we exploit the Theory of Fuzzy Soft Sets. 
Hence the Part 2 of the model is an algorithm vz. FSP which takes the assessment of factors given in Part 1 as its input and produces a possibility profile of cities likely to receive the accident. The algorithm is of O(2^n) complexity. It has been illustrated by an example solved in detail. Simulation results for the algorithm have been presented which give insight into the strengths and weaknesses of FSP. Three different decision making measures have been simulated and compared in our discussion. ",Predicting Suicide Attacks: A Fuzzy Soft Set Approach " An approach to the revision of logic programs under the answer set semantics is presented. For programs P and Q, the goal is to determine the answer sets that correspond to the revision of P by Q, denoted P * Q. A fundamental principle of classical (AGM) revision, and the one that guides the approach here, is the success postulate. In AGM revision, this stipulates that A is in K * A. By analogy with the success postulate, for programs P and Q, this means that the answer sets of Q will in some sense be contained in those of P * Q. The essential idea is that for P * Q, a three-valued answer set for Q, consisting of positive and negative literals, is first determined. The positive literals constitute a regular answer set, while the negated literals make up a minimal set of naf literals required to produce the answer set from Q. These literals are propagated to the program P, along with those rules of Q that are not decided by these literals. The approach differs from work in update logic programs in two main respects. First, we ensure that the revising logic program has higher priority, and so we satisfy the success postulate; second, for the preference implicit in a revision P * Q, the program Q as a whole takes precedence over P, unlike update logic programs, since answer sets of Q are propagated to P. We show that a core group of the AGM postulates are satisfied, as are the postulates that have been proposed for update logic programs. ","A Program-Level Approach to Revising Logic Programs under the Answer Set Semantics" " We study the problem of coalitional manipulation in elections using the unweighted Borda rule. We provide empirical evidence of the manipulability of Borda elections in the form of two new greedy manipulation algorithms based on intuitions from the bin-packing and multiprocessor scheduling domains. Although we have not been able to show that these algorithms beat existing methods in the worst-case, our empirical evaluation shows that they significantly outperform the existing method and are able to find optimal manipulations in the vast majority of the randomly generated elections that we tested. These empirical results provide further evidence that the Borda rule provides little defense against coalitional manipulation. ",An Empirical Study of Borda Manipulation " Autonomous planetary vehicles, also known as rovers, are small autonomous vehicles equipped with a variety of sensors used to perform exploration and experiments on a planet's surface. Rovers work in a partially unknown environment, with narrow energy/time/movement constraints and, typically, small computational resources that limit the complexity of on-line planning and scheduling, thus they represent a great challenge in the field of autonomous vehicles. Indeed, formal models for such vehicles usually involve hybrid systems with nonlinear dynamics, which are difficult to handle by most of the current planning algorithms and tools. 
Therefore, when offline planning of the vehicle activities is required, for example for rovers that operate without a continuous Earth supervision, such planning is often performed on simplified models that are not completely realistic. In this paper we show how the UPMurphi model checking based planning tool can be used to generate resource-optimal plans to control the engine of an autonomous planetary vehicle, working directly on its hybrid model and taking into account several safety constraints, thus achieving very accurate results. ",Resource-Optimal Planning For An Autonomous Planetary Vehicle " This paper presents the solution about the threat of a VBIED (Vehicle-Born Improvised Explosive Device) obtained with the DSmT (Dezert-Smarandache Theory). This problem has been proposed recently to the authors by Simon Maskell and John Lavery as a typical illustrative example to try to compare the different approaches for dealing with uncertainty for decision-making support. The purpose of this paper is to show in details how a solid justified solution can be obtained from DSmT approach and its fusion rules thanks to a proper modeling of the belief functions involved in this problem. ","Threat assessment of a possible Vehicle-Born Improvised Explosive Device using DSmT" " A key factor that can dramatically reduce the search space during constraint solving is the criterion under which the variable to be instantiated next is selected. For this purpose numerous heuristics have been proposed. Some of the best of such heuristics exploit information about failures gathered throughout search and recorded in the form of constraint weights, while others measure the importance of variable assignments in reducing the search space. In this work we experimentally evaluate the most recent and powerful variable ordering heuristics, and new variants of them, over a wide range of benchmarks. Results demonstrate that heuristics based on failures are in general more efficient. Based on this, we then derive new revision ordering heuristics that exploit recorded failures to efficiently order the propagation list when arc consistency is maintained during search. Interestingly, in addition to reducing the number of constraint checks and list operations, these heuristics are also able to cut down the size of the explored search tree. ","Evaluating and Improving Modern Variable and Revision Ordering Strategies in CSPs" " The two standard branching schemes for CSPs are d-way and 2-way branching. Although it has been shown that in theory the latter can be exponentially more effective than the former, there is a lack of empirical evidence showing such differences. To investigate this, we initially make an experimental comparison of the two branching schemes over a wide range of benchmarks. Experimental results verify the theoretical gap between d-way and 2-way branching as we move from a simple variable ordering heuristic like dom to more sophisticated ones like dom/ddeg. However, perhaps surprisingly, experiments also show that when state-of-the-art variable ordering heuristics like dom/wdeg are used then d-way can be clearly more efficient than 2-way branching in many cases. Motivated by this observation, we develop two generic heuristics that can be applied at certain points during search to decide whether 2-way branching or a restricted version of 2-way branching, which is close to d-way branching, will be followed. The application of these heuristics results in an adaptive branching scheme. 
Experiments with instantiations of the two generic heuristics confirm that search with adaptive branching outperforms search with a fixed branching scheme on a wide range of problems. ",Adaptive Branching for Constraint Satisfaction Problems " Event-driven automation of reactive functionalities for complex event processing is an urgent need in today's distributed service-oriented architectures and Web-based event-driven environments. An important problem to be addressed is how to correctly and efficiently capture and process the event-based behavioral, reactive logic embodied in reaction rules, and how to combine this with other conditional decision logic embodied, e.g., in derivation rules. This paper elaborates a homogeneous integration approach that combines derivation rules, reaction rules and other rule types such as integrity constraints into the general framework of logic programming, the industrial-strength version of declarative programming. We describe syntax and semantics of the language, implement a distributed web-based middleware using enterprise service technologies and illustrate its adequacy in terms of expressiveness, efficiency and scalability through examples extracted from industrial use cases. The developed reaction rule language provides expressive features such as modular ID-based updates with support for external imports and self-updates of the intensional and extensional knowledge bases, transactions including integrity testing and roll-backs of update transition paths. It also supports distributed complex event processing, event messaging and event querying via efficient and scalable enterprise middleware technologies and event/action reasoning based on an event/action algebra implemented by an interval-based event calculus variant as a logic inference formalism. ",A Homogeneous Reaction Rule Language for Complex Event Processing " The four pressing software problems raised by the software industry, i.e., User System Communication / Human Machine Interface, Meta Data extraction, Information processing & management, and Data representation, are discussed in this research paper. To contribute to the field, we have proposed and described an intelligent, semantic-oriented, agent-based search engine including the concepts of an intelligent graphical user interface, natural language based information processing, data management, and data reconstruction for the final user-end information representation. ","Semantic Oriented Agent based Approach towards Engineering Data Management, Web Information Retrieval and User System Communication Problems" " Web development is a challenging research area due to its creativity and complexity. The key challenge raised in web technology development is the presentation of data in a machine-readable and processable format, so as to take advantage of knowledge-based information extraction and maintenance. Currently it is not possible to search and extract optimized results using full-text queries because no mechanism exists which can fully extract the semantics from full-text queries and then look for particular knowledge-based information. ","An Agent based Approach towards Metadata Extraction, Modelling and Information Retrieval over the Web" " In order to study the communication between information systems, Gong and Xiao [Z. Gong and Z.
Xiao, Communicating between information systems based on including degrees, International Journal of General Systems 39 (2010) 189--206] proposed the concept of general relation mappings based on including degrees. Some properties of the general relation mappings, and their extension to fuzzy information systems, have been investigated there. In this paper, we point out by counterexamples that several assertions (Lemma 3.1, Lemma 3.2, Theorem 4.1, and Theorem 4.3) in the aforementioned work are not true in general. ","A note on communicating between information systems based on including degrees" " World Wide Web (WWW) is the most popular global information sharing and communication system consisting of three standards, i.e., Uniform Resource Locator (URL), Hypertext Transfer Protocol (HTTP) and Hypertext Mark-up Language (HTML). Information is provided in text, image, audio and video formats over the web by using HTML, which is considered to be unconventional in defining and formalizing the meaning of the context... ",Role of Ontology in Semantic Web Development " Recent research has highlighted the practical benefits of subjective interestingness measures, which quantify the novelty or unexpectedness of a pattern when contrasted with any prior information of the data miner (Silberschatz and Tuzhilin, 1995; Geng and Hamilton, 2006). A key challenge here is the formalization of this prior information in a way that lends itself to the definition of a subjective interestingness measure that is both meaningful and practical. In this paper, we outline a general strategy of how this could be achieved, before working out the details for a use case that is important in its own right. Our general strategy is based on considering prior information as constraints on a probabilistic model representing the uncertainty about the data. More specifically, we represent the prior information by the maximum entropy (MaxEnt) distribution subject to these constraints. We briefly outline various measures that could subsequently be used to contrast patterns with this MaxEnt model, thus quantifying their subjective interestingness. ","Maximum entropy models and subjective interestingness: an application to tiles in binary databases" " We examine the practicality for a user of using Answer Set Programming (ASP) for representing logical formalisms. Our example is a formalism aiming at capturing causal explanations from causal information. We show the naturalness and relative efficiency of this translation job. We are interested in the ease of writing an ASP program. Limitations of the earlier systems meant that, in practice, the ``declarative aspect'' was more theoretical than practical. We show how recent improvements in working ASP systems facilitate the translation. ","A formalism for causal explanations with an Answer Set Programming translation" " Knowledge Management is a global process in companies. It includes all the processes that allow capitalization, sharing and evolution of the Knowledge Capital of the firm, generally recognized as a critical resource of the organization. Several approaches have been defined to capitalize knowledge but few of them study how to learn from this knowledge. We present in this paper an approach that helps to enhance learning from profession knowledge in an organisation. We apply our approach to the knitting industry. 
",Learning from Profession Knowledge: Application on Knitting " Constraint solvers are complex pieces of software which require many design decisions to be made by the implementer based on limited information. These decisions affect the performance of the finished solver significantly. Once a design decision has been made, it cannot easily be reversed, although a different decision may be more appropriate for a particular problem. We investigate using machine learning to make these decisions automatically depending on the problem to solve. We use the alldifferent constraint as a case study. Our system is capable of making non-trivial, multi-level decisions that improve over always making a default choice and can be implemented as part of a general-purpose constraint solver. ","Machine learning for constraint solver design -- A case study for the alldifferent constraint" " Constraint problems can be trivially solved in parallel by exploring different branches of the search tree concurrently. Previous approaches have focused on implementing this functionality in the solver, more or less transparently to the user. We propose a new approach, which modifies the constraint model of the problem. An existing model is split into new models with added constraints that partition the search space. Optionally, additional constraints are imposed that rule out the search already done. The advantages of our approach are that it can be implemented easily, computations can be stopped and restarted, moved to different machines and indeed solved on machines which are not able to communicate with each other at all. ",Distributed solving through model splitting " In many applications involving multi-media data, the definition of similarity between items is integral to several key tasks, e.g., nearest-neighbor retrieval, classification, and recommendation. Data in such regimes typically exhibits multiple modalities, such as acoustic and visual content of video. Integrating such heterogeneous data to form a holistic similarity space is therefore a key challenge to be overcome in many real-world applications. We present a novel multiple kernel learning technique for integrating heterogeneous data into a single, unified similarity space. Our algorithm learns an optimal ensemble of kernel transfor- mations which conform to measurements of human perceptual similarity, as expressed by relative comparisons. To cope with the ubiquitous problems of subjectivity and inconsistency in multi- media similarity, we develop graph-based techniques to filter similarity measurements, resulting in a simplified and robust training procedure. ",Learning Multi-modal Similarity " Consideration of the primal and dual problems together leads to important new insights into the characteristics of boosting algorithms. In this work, we propose a general framework that can be used to design new boosting algorithms. A wide variety of machine learning problems essentially minimize a regularized risk functional. We show that the proposed boosting framework, termed CGBoost, can accommodate various loss functions and different regularizers in a totally-corrective optimization fashion. We show that, by solving the primal rather than the dual, a large body of totally-corrective boosting algorithms can actually be efficiently solved and no sophisticated convex optimization solvers are needed. We also demonstrate that some boosting algorithms like AdaBoost can be interpreted in our framework--even their optimization is not totally corrective. 
We empirically show that various boosting algorithms based on the proposed framework perform similarly on the UC Irvine machine learning datasets [1] that we have used in the experiments. ",Totally Corrective Boosting for Regularized Risk Minimization " Max Restricted Path Consistency (maxRPC) is a local consistency for binary constraints that can achieve considerably stronger pruning than arc consistency. However, existing maxRPC algorithms suffer from overheads and redundancies as they can repeatedly perform many constraint checks without triggering any value deletions. In this paper we propose techniques that can boost the performance of maxRPC algorithms. These include the combined use of two data structures to avoid many redundant constraint checks, and heuristics for the efficient ordering and execution of certain operations. Based on these, we propose two closely related algorithms. The first one, which is a maxRPC algorithm with optimal O(end^3) time complexity, displays good performance when used stand-alone, but is expensive to apply during search. The second one approximates maxRPC and has O(en^2d^4) time complexity, but a restricted version with O(end^4) complexity can be very efficient when used during search. Both algorithms have O(ed) space complexity. Experimental results demonstrate that the resulting methods consistently outperform previous algorithms for maxRPC, often by large margins, and constitute a more than viable alternative to arc consistency on many problems. ",Improving the Performance of maxRPC " This technical report presents a generic exact solution approach for minimizing the project duration of the resource-constrained project scheduling problem with generalized precedences (Rcpsp/max). The approach uses lazy clause generation, i.e., a hybrid of finite domain and Boolean satisfiability solving, in order to apply nogood learning and conflict-driven search on the solution generation. Our experiments show the benefit of lazy clause generation for finding an optimal solution and proving its optimality in comparison to other state-of-the-art exact and non-exact methods. The method is highly robust: it matched or bettered the best known results on all of the 2340 instances we examined except 3, according to the currently available data on the PSPLib. Of the 631 open instances in this set it closed 573 and improved the bounds of 51 of the remaining 58 instances. ","Solving the Resource Constrained Project Scheduling Problem with Generalized Precedences by Lazy Clause Generation" " The search strategy of a CP solver is determined by the variable and value ordering heuristics it employs and by the branching scheme it follows. Although the effects of variable and value ordering heuristics on search effort have been widely studied, the effects of different branching schemes have received less attention. In this paper we study this effect through an experimental evaluation that includes standard branching schemes such as 2-way, d-way, and dichotomic domain splitting, as well as variations of set branching where branching is performed on sets of values. We also propose and evaluate a generic approach to set branching where the partition of a domain into sets is created using the scores assigned to values by a value ordering heuristic, and a clustering algorithm from machine learning. 
Experimental results demonstrate that although exponential differences between branching schemes, as predicted in theory between 2-way and d-way branching, are not very common, still the choice of branching scheme can make quite a difference on certain classes of problems. Set branching methods are very competitive with 2-way branching and outperform it on some problem classes. A statistical analysis of the results reveals that our generic clustering-based set branching method is the best among the methods compared. ",Experimental Evaluation of Branching Schemes for the CSP " In this paper, we address the problem of creating believable agents (virtual characters) in video games. We consider only one meaning of believability, ``giving the feeling of being controlled by a player'', and outline the problem of its evaluation. We present several models for agents in games which can produce believable behaviours, both from industry and research. For a high level of believability, learning, and especially imitation learning, seems to be the way to go. We make a quick overview of different approaches to make video games' agents learn from players. To conclude we propose a two-step method to develop new models for believable agents. First we must find the criteria for believability for our application and define an evaluation method. Then the model and the learning algorithm can be designed. ","The Challenge of Believability in Video Games: Definitions, Agents Models and Imitation Learning" " Classic evaluation methods of believable agents are time-consuming because they involve many humans to judge agents. They are well suited to validate work on new believable behaviour models. However, during the implementation, numerous experiments can help to improve agents' believability. We propose a method which aims at assessing how much an agent's behaviour looks like humans' behaviours. By representing behaviours with vectors, we can store data computed for humans and then evaluate as many agents as needed without further need of humans. We present a test experiment which shows that even a simple evaluation following our method can reveal differences between quite believable agents and humans. This method seems promising although, as shown in our experiment, results' analysis can be difficult. ","Automatable Evaluation Method Oriented toward Behaviour Believability for Video Games" " In this short paper I briefly discuss a 3D war game based on artificial intelligence concepts, called AI WAR. Going into the details, I present the importance of the CAICL language and how this language is used in AI WAR. Moreover I also present a designed and implemented 3D War Cybug for AI WAR using CAICL and discuss the implemented strategy to defeat its enemies during the game's life. ",AI 3D Cybug Gaming " Levesque introduced the notion of only-knowing to precisely capture the beliefs of a knowledge base. He also showed how only-knowing can be used to formalize non-monotonic behavior within a monotonic logic. Despite its appeal, all attempts to extend only-knowing to the many-agent case have undesirable properties. A belief model by Halpern and Lakemeyer, for instance, appeals to proof-theoretic constructs in the semantics and needs to axiomatize validity as part of the logic. It is also not clear how to generalize their ideas to a first-order case. In this paper, we propose a new account of multi-agent only-knowing which, for the first time, has a natural possible-world semantics for a quantified language with equality. 
We then provide, for the propositional fragment, a sound and complete axiomatization that faithfully lifts Levesque's proof theory to the many-agent case. We also discuss comparisons to the earlier approach by Halpern and Lakemeyer. ",Multi-Agent Only-Knowing Revisited " In this paper we present an optimal Bangla Keyboard Layout, which distributes the load equally on both hands so as to maximize ease and minimize effort. The Bangla alphabet has a large number of letters, and for this reason it is difficult to type fast using a Bangla keyboard. Our proposed keyboard will maximize the speed of the operator, as they can type with both hands in parallel. Here we use the association rule of data mining to distribute the Bangla characters in the keyboard. First, we analyze the frequencies of data consisting of monograph, digraph and trigraph, which are derived from a data warehouse, and then use the association rule of data mining to distribute the Bangla characters in the layout. Finally, we propose a Bangla Keyboard Layout. Experimental results on several keyboard layouts show the effectiveness of the proposed approach with better performance. ",Optimal Bangla Keyboard Layout using Association Rule of Data Mining " This paper presents an optimal Bangla Keyboard Layout, which distributes the load equally on both hands so as to maximize ease and minimize effort. The Bangla alphabet has a large number of letters, and for this reason it is difficult to type fast using a Bangla keyboard. Our proposed keyboard will maximize the speed of the operator, as they can type with both hands in parallel. Here we use the association rule of data mining to distribute the Bangla characters in the keyboard. First, we analyze the frequencies of data consisting of monograph, digraph and trigraph, which are derived from a data warehouse, and then use the association rule of data mining to distribute the Bangla characters in the layout. Experimental results on several data sets show the effectiveness of the proposed approach with better performance. ",Optimal Bangla Keyboard Layout using Data Mining Technique " The Bangla alphabet has a large number of letters, and for this reason it is complicated to type fast using a Bangla keyboard. The proposed keyboard will maximize the speed of the operator, as they can type with both hands in parallel. The association rule of data mining is used here to distribute the Bangla characters in the keyboard. The frequencies of data consisting of monograph, digraph and trigraph, which are derived from a data warehouse, are analyzed, and then the association rule of data mining is used to distribute the Bangla characters in the layout. Experimental results on several data sets show the effectiveness of the proposed approach with better performance. This paper presents an optimal Bangla Keyboard Layout, which distributes the load equally on both hands so as to maximize ease and minimize effort. ",The Most Advantageous Bangla Keyboard Layout Using Data Mining Technique " Support Vector Machines (SVMs) are popular tools for data mining tasks such as classification, regression, and density estimation. However, original SVM (C-SVM) only considers local information of data points on or over the margin. Therefore, C-SVM loses robustness. To solve this problem, one approach is to translate (i.e., to move without rotation or change of shape) the hyperplane according to the distribution of the entire data. But existing work can only be applied to the 1-D case. 
In this paper, we propose a simple and efficient method called General Scaled SVM (GS-SVM) to extend the existing approach to the multi-dimensional case. Our method translates the hyperplane according to the distribution of data projected on the normal vector of the hyperplane. Compared with C-SVM, GS-SVM has better performance on several data sets. ",General Scaled Support Vector Machines " The problem of measuring similarity of graphs and their nodes is important in a range of practical problems. There are a number of proposed measures, some of them being based on iterative calculation of similarity between two graphs and the principle that two nodes are as similar as their neighbors are. In our work, we propose one novel method of that sort, with a refined concept of similarity of two nodes that involves matching of their neighbors. We prove convergence of the proposed method and show that it has some additional desirable properties that, to our knowledge, the existing methods lack. We illustrate the method on two specific problems and empirically compare it to other methods. ",Measuring Similarity of Graphs and their Nodes by Neighbor Matching " The paper proposes an artificial intelligence technique called hill climbing to find numerical solutions of Diophantine Equations. Such equations are important as they have many applications in fields like public key cryptography, integer factorization, algebraic curves, projective curves and data dependency in supercomputers. Importantly, it has been proved that there is no general method to find solutions of such equations. This paper is an attempt to find numerical solutions of Diophantine equations using the steepest ascent version of Hill Climbing. The method, which uses a tree representation to depict possible solutions of Diophantine equations, adopts a novel methodology to generate successors. The heuristic function used helps to cast the process of finding a solution as a minimization process. The work illustrates the effectiveness of the proposed methodology using a class of Diophantine equations given by a_1*x_1^p_1 + a_2*x_2^p_2 + ... + a_n*x_n^p_n = N, where a_i and N are integers. The experimental results validate that the procedure proposed is successful in finding solutions of Diophantine Equations with sufficiently large powers and a large number of variables. ",Steepest Ascent Hill Climbing For A Mathematical Problem " This paper is mainly concerned with the question of how to decompose multiclass classification problems into binary subproblems. We extend known Jensen-Shannon bounds on the Bayes risk of binary problems to hierarchical multiclass problems and use these bounds to develop a heuristic procedure for constructing hierarchical multiclass decomposition for multinomials. We test our method and compare it to the well known ""all-pairs"" decomposition. Our tests are performed using a new authorship determination benchmark test of machine learning authors. The new method consistently outperforms the all-pairs decomposition when the number of classes is small and breaks even on larger multiclass problems. Using both methods, the classification accuracy we achieve, using an SVM over a feature set consisting of both high frequency single tokens and high frequency token-pairs, appears to be exceptionally high compared to known results in authorship determination. 
","Hierarchical Multiclass Decompositions with Application to Authorship Determination" " The iDian (previously named as the Operation Agent System) is a framework designed to enable computer users to operate software in natural language. Distinct from current speech-recognition systems, our solution supports format-free combinations of orders, and is open to both developers and customers. We used a multi-layer structure to build the entire framework, approached rule-based natural language processing, and implemented demos narrowing down to Windows, text-editing and a few other applications. This essay will firstly give an overview of the entire system, and then scrutinize the functions and structure of the system, and finally discuss the prospective de-velopment, esp. on-line interaction functions. ",Introduction to the iDian " In this work we present a protocol for self-synchronized duty-cycling in wireless sensor networks with energy harvesting capabilities. The protocol is implemented in Wiselib, a library of generic algorithms for sensor networks. Simulations are conducted with the sensor network simulator Shawn. They are based on the specifications of real hardware known as iSense sensor nodes. The experimental results show that the proposed mechanism is able to adapt to changing energy availabilities. Moreover, it is shown that the system is very robust against packet loss. ","A Protocol for Self-Synchronized Duty-Cycling in Sensor Networks: Generic Implementation in Wiselib" " Active Learning Method (ALM) is a soft computing method used for modeling and control based on fuzzy logic. All operators defined for fuzzy sets must serve as either fuzzy S-norm or fuzzy T-norm. Despite being a powerful modeling method, ALM does not possess operators which serve as S-norms and T-norms which deprive it of a profound analytical expression/form. This paper introduces two new operators based on morphology which satisfy the following conditions: First, they serve as fuzzy S-norm and T-norm. Second, they satisfy Demorgans law, so they complement each other perfectly. These operators are investigated via three viewpoints: Mathematics, Geometry and fuzzy logic. ",New S-norm and T-norm Operators for Active Learning Method " Substitutability, interchangeability and related concepts in Constraint Programming were introduced approximately twenty years ago and have given rise to considerable subsequent research. We survey this work, classify, and relate the different concepts, and indicate directions for future work, in particular with respect to making connections with research into symmetry breaking. This paper is a condensed version of a larger work in progress. ",A Partial Taxonomy of Substitutability and Interchangeability " Concept drift refers to a non stationary learning problem over time. The training and the application data often mismatch in real life problems. In this report we present a context of concept drift problem 1. We focus on the issues relevant to adaptive training set formation. We present the framework and terminology, and formulate a global picture of concept drift learners design. We start with formalizing the framework for the concept drifting data in Section 1. In Section 2 we discuss the adaptivity mechanisms of the concept drift learners. In Section 3 we overview the principle mechanisms of concept drift learners. In this chapter we give a general picture of the available algorithms and categorize them based on their properties. 
Section 4 discusses the related research fields and Section 5 groups and presents major concept drift applications. This report is intended to give a bird's-eye view of the concept drift research field, provide a context for the research, and position it within the broad spectrum of research fields and applications. ",Learning under Concept Drift: an Overview " We introduce a new perspective on spectral dimensionality reduction which views these methods as Gaussian Markov random fields (GRFs). Our unifying perspective is based on the maximum entropy principle which is in turn inspired by maximum variance unfolding. The resulting model, which we call maximum entropy unfolding (MEU), is a nonlinear generalization of principal component analysis. We relate the model to Laplacian eigenmaps and isomap. We show that parameter fitting in the locally linear embedding (LLE) is approximate maximum likelihood MEU. We introduce a variant of LLE that performs maximum likelihood exactly: Acyclic LLE (ALLE). We show that MEU and ALLE are competitive with the leading spectral approaches on a robot navigation visualization and a human motion capture data set. Finally the maximum likelihood perspective allows us to introduce a new approach to dimensionality reduction based on L1 regularization of the Gaussian random field via the graphical lasso. ","A Unifying Probabilistic Perspective for Spectral Dimensionality Reduction: Insights and New Models" " Humans can be distinguished by different limb movements and unique ground reaction forces. A cumulative foot pressure image is a 2-D cumulative ground reaction force during one gait cycle. Although it contains pressure spatial distribution information and pressure temporal distribution information, it suffers from several problems, including different shoes and noise, when putting it into practice as a new biometric for pedestrian identification. In this paper, we propose a hierarchical translation-invariant representation for cumulative foot pressure images, inspired by the success of convolutional deep belief networks for digit classification. The key contribution of our approach is a discriminative hierarchical sparse coding scheme which helps to learn useful discriminative high-level visual features. Based on the feature representation of cumulative foot pressure images, we develop a pedestrian recognition system which is invariant to three different shoes and slight local shape change. Experiments are conducted on a proposed open dataset that contains more than 2800 cumulative foot pressure images from 118 subjects. Evaluations suggest the effectiveness of the proposed method and the potential of cumulative foot pressure images as a biometric. ",Translation-Invariant Representation for Cumulative Foot Pressure Images " An important issue in Qualitative Spatial Reasoning is the representation of relative direction. In this paper we present simple geometric rules that enable reasoning about relative direction between oriented points. This framework, the Oriented Point Algebra OPRA_m, has a scalable granularity m. We develop a simple algorithm for computing the OPRA_m composition tables and prove its correctness. Using a composition table, algebraic closure for a set of OPRA statements is sufficient to solve spatial navigation tasks. And it turns out that scalable granularity is useful in these navigation tasks. 
","Qualitative Reasoning about Relative Direction on Adjustable Levels of Granularity" " In the article a turn-based game played on four computers connected via network is investigated. There are three computers with natural intelligence and one with artificial intelligence. Game table is seen by each player's own view point in all players' monitors. Domino pieces are three dimensional. For distributed systems TCP/IP protocol is used. In order to get 3D image, Microsoft XNA technology is applied. Domino 101 game is nondeterministic game that is result of the game depends on the initial random distribution of the pieces. Number of the distributions is equal to the multiplication of following combinations: . Moreover, in this game that is played by four people, players are divided into 2 pairs. Accordingly, we cannot predict how the player uses the dominoes that is according to the dominoes of his/her partner or according to his/her own dominoes. The fact that the natural intelligence can be a player in any level affects the outcome. These reasons make it difficult to develop an AI. In the article four levels of AI are developed. The AI in the first level is equivalent to the intelligence of a child who knows the rules of the game and recognizes the numbers. The AI in this level plays if it has any domino, suitable to play or says pass. In most of the games which can be played on the internet, the AI does the same. But the AI in the last level is a master player, and it can develop itself according to its competitors' levels. ",A Distributed AI Aided 3D Domino Game " In this paper it is considered rule reduct generation problem, based on Rough Set Theory. Rule Reduct Generation (RG) and Modified Rule Generation (MRG) algorithms are well-known. Alternative to these algorithms Pruning Algorithm of Generation A Minimal Set of Rule Reducts, or briefly Pruning Rule Generation (PRG) algorithm is developed. PRG algorithm uses tree structured data type. PRG algorithm is compared with RG and MRG algorithms ","Prunnig Algorithm of Generation a Minimal Set of Rule Reducts Based on Rough Set Theory" " The cardinal direction calculus (CDC) proposed by Goyal and Egenhofer is a very expressive qualitative calculus for directional information of extended objects. Early work has shown that consistency checking of complete networks of basic CDC constraints is tractable while reasoning with the CDC in general is NP-hard. This paper shows, however, if allowing some constraints unspecified, then consistency checking of possibly incomplete networks of basic CDC constraints is already intractable. This draws a sharp boundary between the tractable and intractable subclasses of the CDC. The result is achieved by a reduction from the well-known 3-SAT problem. ","Reasoning about Cardinal Directions between Extended Objects: The Hardness Result" " Imitation learning in robots, also called programing by demonstration, has made important advances in recent years, allowing humans to teach context dependant motor skills/tasks to robots. We propose to extend the usual contexts investigated to also include acoustic linguistic expressions that might denote a given motor skill, and thus we target joint learning of the motor skills and their potential acoustic linguistic name. 
In addition to this, a modification of a class of existing algorithms within the imitation learning framework is made so that they can handle the unlabeled demonstration of several tasks/motor primitives without having to inform the imitator of what task is being demonstrated or what the number of tasks is, which is a necessity for language learning, i.e., if one wants to teach naturally an open number of new motor skills together with their acoustic names. Finally, a mechanism for detecting whether or not linguistic input is relevant to the task is also proposed, and our architecture also allows the robot to find the right framing for a given identified motor primitive. With these additions it becomes possible to build an imitator that bridges the gap between imitation learning and language learning by being able to learn linguistic expressions using methods from the imitation learning community. In this sense the imitator can learn a word by guessing whether a certain speech pattern present in the context means that a specific task is to be executed. The imitator is however not assumed to know that speech is relevant and has to figure this out on its own by looking at the demonstrations: indeed, the architecture allows the robot to transparently also learn tasks which should not be triggered by an acoustic word, but for example by the color or position of an object or a gesture made by someone in the environment. To demonstrate this ability to find the ... ","Imitation learning of motor primitives and language bootstrapping in robots" " The aim of this study is to show the importance of two classification techniques, viz. decision tree and clustering, in the prediction of learning disabilities (LD) of school-age children. LDs affect about 10 percent of all children enrolled in schools. The problems of children with specific learning disabilities have been a cause of concern to parents and teachers for some time. Decision trees and clustering are powerful and popular tools used for classification and prediction in data mining. Different rules extracted from the decision tree are used for prediction of learning disabilities. Clustering is the assignment of a set of observations into subsets, called clusters, which are useful in finding the different signs and symptoms (attributes) present in the LD-affected child. In this paper, the J48 algorithm is used for constructing the decision tree and the K-means algorithm is used for creating the clusters. By applying these classification techniques, LD in any child can be identified. ","Significance of Classification Techniques in Prediction of Learning Disabilities" " The task of verifying the compatibility between interacting web services has traditionally been limited to checking the compatibility of the interaction protocol in terms of message sequences and the type of data being exchanged. Since web services are developed largely in an uncoordinated way, different services often use independently developed ontologies for the same domain instead of adhering to a single ontology as standard. In this work we investigate the approaches that can be taken by the server to verify the possibility of reaching a state with semantically inconsistent results during the execution of a protocol with a client, if the client ontology is published. Often a database is used to store the actual data along with the ontologies instead of storing the actual data as a part of the ontology description. 
It is important to observe that at the current state of the database the semantic conflict state may not be reached even if the verification done by the server indicates the possibility of reaching a conflict state. A relational algebra based decision procedure is also developed to incorporate the current state of the client and the server databases in the overall verification procedure. ","Detecting Ontological Conflicts in Protocols between Semantic Web Services" " The paper proposes a numerically stable recursive algorithm for the exact computation of the linear-chain conditional random field gradient. It operates as a forward algorithm over the log-domain expectation semiring and has the purpose of enhancing memory efficiency when applied to long observation sequences. Unlike the traditional algorithm based on the forward-backward recursions, the memory complexity of our algorithm does not depend on the sequence length. The experiments on real data show that it can be useful for the problems which deal with long sequences. ","Gradient Computation In Linear-Chain Conditional Random Fields Using The Entropy Message Passing Algorithm" " In this paper, a new reinforcement learning approach is proposed which is based on a powerful concept named Active Learning Method (ALM) in modeling. ALM expresses any multi-input-single-output system as a fuzzy combination of some single-input-single-output systems. The proposed method is an actor-critic system similar to the Generalized Approximate Reasoning based Intelligent Control (GARIC) structure to adapt the ALM by delayed reinforcement signals. Our system uses Temporal Difference (TD) learning to model the behavior of useful actions of a control system. The goodness of an action is modeled on a Reward-Penalty Plane. IDS planes will be updated according to this plane. It is shown that the system can learn with a predefined fuzzy system or without it (through random actions). ",Reinforcement Learning Based on Active Learning Method " An effective approach for energy conservation in wireless sensor networks is scheduling sleep intervals for extraneous nodes while the remaining nodes stay active to provide continuous service. For the sensor network to operate successfully, the active nodes must maintain both sensing coverage and network connectivity. It was proved before that if the communication range of the nodes is at least twice the sensing range, complete coverage of a convex area implies connectivity among the working set of nodes. In this paper we consider a rectangular region A = a*b whose side lengths a and b are constrained relative to the sensing range R_s of the nodes, and put a constraint on the minimum allowed distance s between nodes. According to this constraint, we present a new lower bound for the communication range, expressed in terms of s and the sensing range, such that complete coverage of the considered area implies connectivity among the working set of nodes; we also present a new distribution method that satisfies our constraint. ",A New Sufficient Condition for 1-Coverage to Imply Connectivity " In this paper, we propose a new approach for recommender systems based on target tracking by Kalman filtering. We assume that users and their seen resources are vectors in the multidimensional space of the categories of the resources. Knowing this space, we propose an algorithm based on a Kalman filter to track users and to predict their future position in the recommendation space. 
","Target tracking in the recommender space: Toward a new recommender system based on Kalman filtering" " We investigate projection methods, for evaluating a linear approximation of the value function of a policy in a Markov Decision Process context. We consider two popular approaches, the one-step Temporal Difference fix-point computation (TD(0)) and the Bellman Residual (BR) minimization. We describe examples, where each method outperforms the other. We highlight a simple relation between the objective function they minimize, and show that while BR enjoys a performance guarantee, TD(0) does not in general. We then propose a unified view in terms of oblique projections of the Bellman equation, which substantially simplifies and extends the characterization of (schoknecht,2002) and the recent analysis of (Yu & Bertsekas, 2008). Eventually, we describe some simulations that suggest that if the TD(0) solution is usually slightly better than the BR solution, its inherent numerical instability makes it very bad in some cases, and thus worse on average. ","Should one compute the Temporal Difference fix point or minimize the Bellman Residual? The unified oblique projection view" " Graph coloring, also known as vertex coloring, considers the problem of assigning colors to the nodes of a graph such that adjacent nodes do not share the same color. The optimization version of the problem concerns the minimization of the number of used colors. In this paper we deal with the problem of finding valid colorings of graphs in a distributed way, that is, by means of an algorithm that only uses local information for deciding the color of the nodes. Such algorithms prescind from any central control. Due to the fact that quite a few practical applications require to find colorings in a distributed way, the interest in distributed algorithms for graph coloring has been growing during the last decade. As an example consider wireless ad-hoc and sensor networks, where tasks such as the assignment of frequencies or the assignment of TDMA slots are strongly related to graph coloring. The algorithm proposed in this paper is inspired by the calling behavior of Japanese tree frogs. Male frogs use their calls to attract females. Interestingly, groups of males that are located nearby each other desynchronize their calls. This is because female frogs are only able to correctly localize the male frogs when their calls are not too close in time. We experimentally show that our algorithm is very competitive with the current state of the art, using different sets of problem instances and comparing to one of the most competitive algorithms from the literature. ","Distributed Graph Coloring: An Approach Based on the Calling Behavior of Japanese Tree Frogs" " This paper describes an application of Bayesian programming to the control of an autonomous avatar in a multiplayer role-playing game (the example is based on World of Warcraft). We model a particular task, which consists of choosing what to do and to select which target in a situation where allies and foes are present. We explain the model in Bayesian programming and show how we could learn the conditional probabilities from data gathered during human-played sessions. 
",Bayesian Modeling of a Human MMORPG Player " We present a probabilistic logic programming framework to reinforcement learning, by integrating reinforce-ment learning, in POMDP environments, with normal hybrid probabilistic logic programs with probabilistic answer set seman-tics, that is capable of representing domain-specific knowledge. We formally prove the correctness of our approach. We show that the complexity of finding a policy for a reinforcement learning problem in our approach is NP-complete. In addition, we show that any reinforcement learning problem can be encoded as a classical logic program with answer set semantics. We also show that a reinforcement learning problem can be encoded as a SAT problem. We present a new high level action description language that allows the factored representation of POMDP. Moreover, we modify the original model of POMDP so that it be able to distinguish between knowledge producing actions and actions that change the environment. ","Reinforcement Learning in Partially Observable Markov Decision Processes using Hybrid Probabilistic Logic Programs" " Biometrics is the science and technology of measuring and analyzing biological data of human body, extracting a feature set from the acquired data, and comparing this set against to the template set in the database. Experimental studies show that Unimodal biometric systems had many disadvantages regarding performance and accuracy. Multimodal biometric systems perform better than unimodal biometric systems and are popular even more complex also. We examine the accuracy and performance of multimodal biometric authentication systems using state of the art Commercial Off- The-Shelf (COTS) products. Here we discuss fingerprint and face biometric systems, decision and fusion techniques used in these systems. We also discuss their advantage over unimodal biometric systems. ",Multimodal Biometric Systems - Study to Improve Accuracy and Performance " Uncertainty of decisions in safety-critical engineering applications can be estimated on the basis of the Bayesian Markov Chain Monte Carlo (MCMC) technique of averaging over decision models. The use of decision tree (DT) models assists experts to interpret causal relations and find factors of the uncertainty. Bayesian averaging also allows experts to estimate the uncertainty accurately when a priori information on the favored structure of DTs is available. Then an expert can select a single DT model, typically the Maximum a Posteriori model, for interpretation purposes. Unfortunately, a priori information on favored structure of DTs is not always available. For this reason, we suggest a new prior on DTs for the Bayesian MCMC technique. We also suggest a new procedure of selecting a single DT and describe an application scenario. In our experiments on the Short-Term Conflict Alert data our technique outperforms the existing Bayesian techniques in predictive accuracy of the selected single DTs. ","A Bayesian Methodology for Estimating Uncertainty of Decisions in Safety-Critical Systems" " We examine the practicality for a user of using Answer Set Programming (ASP) for representing logical formalisms. We choose as an example a formalism aiming at capturing causal explanations from causal information. We provide an implementation, showing the naturalness and relative efficiency of this translation job. We are interested in the ease for writing an ASP program, in accordance with the claimed ``declarative'' aspect of ASP. 
Limitations of the earlier systems (poor data structure and difficulty in reusing pieces of programs) meant that, in practice, the ``declarative aspect'' was more theoretical than practical. We show how recent improvements in working ASP systems greatly facilitate the translation, even if a few improvements could still be useful. ",Using ASP with recent extensions for causal explanations " There are a huge number of problems, from various areas, being solved by reducing them to SAT. However, for many applications, translation into SAT is performed by specialized, problem-specific tools. In this paper we describe a new system for uniform solving of a wide class of problems by reducing them to SAT. The system uses a new specification language URSA that combines imperative and declarative programming paradigms. The reduction to SAT is defined precisely by the semantics of the specification language. The domain of the approach is wide (e.g., many NP-complete problems can be simply specified and then solved by the system) and there are problems easily solvable by the proposed system, while they can hardly be solved by using other programming languages or constraint programming systems. So, the system can be seen not only as a tool for solving problems by reducing them to SAT, but also as a general-purpose constraint solving system (for finite domains). In this paper, we also describe an open-source implementation of the described approach. The performed experiments suggest that the system is competitive with state-of-the-art related modelling systems. ",URSA: A System for Uniform Reduction to SAT " SNOMED Clinical Terms (SNOMED CT) is one of the most widespread ontologies in the life sciences, with more than 300,000 concepts and relationships, but is distributed with no associated software tools. In this paper we present MySNOM, a web-based SNOMED CT browser. MySNOM allows organizations to browse their own distribution of SNOMED CT under a controlled environment, focuses on navigating using the structure of SNOMED CT, and has diagramming capabilities. ",Are SNOMED CT Browsers Ready for Institutions? Introducing MySNOM " In this paper we dealt with the comparison and linking of lexical resources with domain knowledge provided by ontologies. It is one of the issues in combining Semantic Web ontologies and text mining. We investigated the relations between the linguistics-oriented and domain-specific semantics, by associating the GO biological process concepts to the FrameNet semantic frames. The result shows the gaps between the linguistics-oriented and domain-specific semantics in the classification of events and the grouping of target words. The result provides valuable information for the improvement of domain ontologies supporting text mining systems. It will also result in benefits to language understanding technology. ","A study on the relation between linguistics-oriented and domain-specific semantics" " Virtual e-Science infrastructures supporting Web-based scientific workflows are an example of knowledge-intensive collaborative and weakly-structured processes where the interaction with the human scientists during process execution plays a central role. In this paper we propose the lightweight dynamic user-friendly interaction with humans during execution of scientific workflows via the low-barrier approach of Semantic Wikis as an intuitive interface for non-technical scientists. 
Our Process Makna Semantic Wiki system is a novel combination of a business process management system adapted for scientific workflows with a Corporate Semantic Web Wiki user interface supporting knowledge-intensive human interaction tasks during scientific workflow execution. ",Process Makna - A Semantic Wiki for Scientific Workflows " ChemgaPedia is a multimedia, web-based eLearning service platform that currently contains about 18,000 pages organized in 1,700 chapters covering the complete bachelor studies in chemistry and related topics of chemistry, pharmacy, and life sciences. The eLearning encyclopedia contains some 25,000 media objects and the eLearning platform provides services such as virtual and remote labs for experiments. With up to 350,000 users per month the platform is the most frequently used scientific educational service on the German-speaking Internet. In this demo we show the benefit of mapping the static eLearning contents of ChemgaPedia to a Linked Data representation for Semantic Chemistry, which allows for generating dynamic eLearning paths tailored to the semantic profiles of the users. ","Use of semantic technologies for the development of a dynamic trajectories generator in a Semantic Chemistry eLearning platform" " This research applies ideas from argumentation theory in the context of semantic wikis, aiming to provide support for structured, large-scale argumentation between human agents. The implemented prototype is exemplified by modelling the MMR vaccine controversy. ",Using Semantic Wikis for Structured Argument in Medical Domain " Creating a new Ontology: a Modular Approach ",Creating a new Ontology: a Modular Approach " Research in the Life Sciences depends on the integration of large, distributed and heterogeneous data sources and web services. The discovery of which of these resources are the most appropriate to solve a given task is a complex research question, since there is a large number of plausible candidates and there is little, mostly unstructured, metadata to be able to decide among them. We contribute a semi-automatic approach, based on semantic techniques, to assist researchers in the discovery of the most appropriate web services to fulfil a set of given requirements. ","A semantic approach for the requirement-driven discovery of web services in the Life Sciences" " Semantic wikis, wikis enhanced with Semantic Web technologies, are appropriate systems for community-authored knowledge models. They are particularly suitable for scientific collaboration. This paper details the design principles of WikiBridge, a semantic wiki. ",Scientific Collaborations: principles of WikiBridge Design " We present Populous, a tool for gathering content with which to populate an ontology. Domain experts need to add content, which is often repetitive in its form, but without having to tackle the underlying ontological representation. Populous presents users with a table-based form in which columns are constrained to take values from particular ontologies; the user can select a concept from an ontology via its meaningful label to give a value for a given entity attribute. Populated tables are mapped to patterns that can then be used to automatically generate the ontology's content. Populous's contribution is in the knowledge gathering stage of ontology development. It separates knowledge gathering from the conceptualisation and also separates the user from the standard ontology authoring environments. 
As a result, Populous can allow knowledge to be gathered in a straight-forward manner that can then be used to do mass production of ontology content. ",Populous: A tool for populating ontology templates " In this work, we develop an intelligent user interface that allows users to enter biomedical queries in a natural language, and that presents the answers (possibly with explanations if requested) in a natural language. We develop a rule layer over biomedical ontologies and databases, and use automated reasoners to answer queries considering relevant parts of the rule layer. ",Querying Biomedical Ontologies in Natural Language using Answer Set " There has been a long history of using fuzzy language equivalence to compare the behavior of fuzzy systems, but the comparison at this level is too coarse. Recently, a finer behavioral measure, bisimulation, has been introduced to fuzzy finite automata. However, the results obtained are applicable only to finite-state systems. In this paper, we consider bisimulation for general fuzzy systems which may be infinite-state or infinite-event, by modeling them as fuzzy transition systems. To help understand and check bisimulation, we characterize it in three ways by enumerating whole transitions, comparing individual transitions, and using a monotonic function. In addition, we address composition operations, subsystems, quotients, and homomorphisms of fuzzy transition systems and discuss their properties connected with bisimulation. The results presented here are useful for comparing the behavior of general fuzzy systems. In particular, this makes it possible to relate an infinite fuzzy system to a finite one, which is easier to analyze, with the same behavior. ",Bisimulations for fuzzy transition systems " Fuzzy automata have long been accepted as a generalization of nondeterministic finite automata. A closer examination, however, shows that the fundamental property---nondeterminism---in nondeterministic finite automata has not been well embodied in the generalization. In this paper, we introduce nondeterministic fuzzy automata with or without $\el$-moves and fuzzy languages recognized by them. Furthermore, we prove that (deterministic) fuzzy automata, nondeterministic fuzzy automata, and nondeterministic fuzzy automata with $\el$-moves are all equivalent in the sense that they recognize the same class of fuzzy languages. ",Nondeterministic fuzzy automata " The previous decade has brought a remarkable increase of the interest in applications that deal with querying and mining of time series data. Many of the research efforts in this context have focused on introducing new representation methods for dimensionality reduction or novel similarity measures for the underlying data. In the vast majority of cases, each individual work introducing a particular method has made specific claims and, aside from the occasional theoretical justifications, provided quantitative experimental observations. However, for the most part, the comparative aspects of these experiments were too narrowly focused on demonstrating the benefits of the proposed methods over some of the previously introduced ones. In order to provide a comprehensive validation, we conducted an extensive experimental study re-implementing eight different time series representations and nine similarity measures and their variants, and testing their effectiveness on thirty-eight time series data sets from a wide variety of application domains. 
In this paper, we give an overview of these different techniques and present our comparative experimental findings regarding their effectiveness. In addition to providing a unified validation of some of the existing achievements, our experiments also indicate that, in some cases, certain claims in the literature may be unduly optimistic. ","Experimental Comparison of Representation Methods and Distance Measures for Time Series Data" " In this paper, we propose a new approach for recommender systems based on target tracking by Kalman filtering. We assume that users and their seen resources are vectors in the multidimensional space of the categories of the resources. Knowing this space, we propose an algorithm based on a Kalman filter to track users and to provide the best prediction of their future position in the recommendation space. ","A new Recommender system based on target tracking: a Kalman Filter approach" " Knowledge is attributed to humans, whose problem-solving behavior is subjective and complex. In today's knowledge economy, the need to manage knowledge produced by a community of actors cannot be overemphasized. This is due to the fact that actors possess some level of tacit knowledge which is generally difficult to articulate. Problem-solving requires searching and sharing of knowledge among a group of actors in a particular context. Knowledge expressed within the context of a problem resolution must be capitalized for future reuse. In this paper, an approach that permits dynamic capitalization of relevant and reliable actors' knowledge in solving a decision problem following the Economic Intelligence process is proposed. A knowledge annotation method and temporal attributes are used for handling the complexity in the communication among actors and in contextualizing expressed knowledge. A prototype is built to demonstrate the functionalities of a collaborative Knowledge Management system based on this approach. It is tested with sample cases and the results showed that dynamic capitalization leads to knowledge validation, hence increasing the reliability of captured knowledge for reuse. The system can be adapted to various domains. ","Dynamic Capitalization and Visualization Strategy in Collaborative Knowledge Management System for EI Process" " The shift from industrial economy to knowledge economy in today's world has revolutionized strategic planning in organizations as well as their problem-solving approaches. The point of focus today is knowledge and service production, with more emphasis being laid on knowledge capital. Many organizations are investing in tools that facilitate knowledge sharing among their employees, and they are as well promoting and encouraging collaboration among their staff in order to build the organization's knowledge capital with the ultimate goal of creating a lasting competitive advantage for their organizations. One of the current leading approaches used for solving an organization's decision problems is the Economic Intelligence (EI) approach, which involves interactions among various actors called EI actors. These actors collaborate to ensure the overall success of the decision problem solving process. In the course of the collaboration, the actors express knowledge which could be capitalized for future reuse. In this paper, we propose, in the first place, an annotation model for knowledge elicitation among EI actors.
Because of the need to build a knowledge capital, we also propose a dynamic knowledge capitalisation approach for managing knowledge produced by the actors. Finally, the need to manage the interactions and the interdependencies among collaborating EI actors led to our third proposition, which constitutes an awareness mechanism for group work management. ","Dynamic Knowledge Capitalization through Annotation among Economic Intelligence Actors in a Collaborative Environment" " A new distance function dist(A,B) for fuzzy sets A and B is introduced. It is based on the descriptive complexity, i.e., the number of bits (on average) that are needed to describe an element in the symmetric difference of the two sets. The distance gives the amount of additional information needed to describe any one of the two sets given the other. We prove its mathematical properties and perform pattern clustering on data based on this distance. ",Descriptive-complexity based distance for fuzzy sets " Product take-back legislation forces manufacturers to bear the costs of collection and disposal of products that have reached the end of their useful lives. In order to reduce these costs, manufacturers can consider reuse, remanufacturing and/or recycling of components as an alternative to disposal. The implementation of such alternatives usually requires an appropriate reverse supply chain management. With the concepts of reverse supply chain gaining popularity in practice, the use of artificial intelligence approaches in these areas is also becoming popular. As a result, the purpose of this paper is to give an overview of the recent publications concerning the application of artificial intelligence techniques to reverse supply chain with emphasis on certain types of product returns. ","Artificial Intelligence in Reverse Supply Chain Management: The State of the Art" " Intersections constitute one of the most dangerous elements in road systems. Traffic signals remain the most common way to control traffic at high-volume intersections and offer many opportunities to apply intelligent transportation systems to make traffic more efficient and safe. This paper describes an automated method to estimate the temporal exposure of road users crossing the conflict zone to lateral collision with road users originating from a different approach. This component is part of a larger system relying on video sensors to provide queue lengths and spatial occupancy that are used for real-time traffic control and monitoring. The method is evaluated on data collected during a real world experiment. ","Automatic Estimation of the Exposure to Lateral Collision in Signalized Intersections using Video Sensors" " A conservative class of constraint satisfaction problems (CSPs) is a class for which membership is preserved under arbitrary domain reductions. Many well-known tractable classes of CSPs are conservative. It is well known that lexleader constraints may significantly reduce the number of solutions by excluding symmetric solutions of CSPs. We show that adding certain lexleader constraints to any instance of any conservative class of CSPs still allows us to find all solutions with a time which is polynomial between successive solutions. The time is polynomial in the total size of the instance and the additional lexleader constraints. It is well known that for complete symmetry breaking one may need an exponential number of lexleader constraints.
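To make the lexleader idea concrete, here is a minimal hedged sketch (the toy CSP, the single variable symmetry and all names are invented; it is not the construction studied here): only assignments that are lexicographically no greater than their image under the symmetry are kept:

```python
# Illustrative sketch of lex-leader symmetry breaking: keep only assignments
# that are lexicographically <= their image under a known variable symmetry.
# The symmetry (swap variables x0 and x1) and the toy CSP are hypothetical.
from itertools import product

def lex_leq(a, b):
    return list(a) <= list(b)   # Python compares lists lexicographically

def apply_symmetry(assignment, perm):
    """Permute variable positions: the value of variable i moves to position perm[i]."""
    image = [None] * len(assignment)
    for i, v in enumerate(assignment):
        image[perm[i]] = v
    return image

def solutions(domain, constraint, perm):
    for assignment in product(domain, repeat=len(perm)):
        if not constraint(assignment):
            continue
        if lex_leq(assignment, apply_symmetry(assignment, perm)):
            yield assignment   # lex-leader representative of its orbit

if __name__ == "__main__":
    all_different = lambda a: len(set(a)) == len(a)
    swap_first_two = [1, 0, 2]   # symmetry swapping x0 and x1
    for sol in solutions(range(3), all_different, swap_first_two):
        print(sol)
```

With the single swap symmetry, three of the six all-different assignments survive as lex-leader representatives.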
However, in practice, the number of additional lexleader constraints is typically polynomial in the size of the instance. For polynomially many lexleader constraints, we may in general not have complete symmetry breaking, but polynomially many lexleader constraints may provide practically useful symmetry breaking -- and they sometimes exclude super-exponentially many solutions. We prove that for any instance from a conservative class, the time between finding successive solutions of the instance with polynomially many additional lexleader constraints is polynomial even in the size of the instance without lexleader constraints. ",Symmetry Breaking with Polynomial Delay " In the interpretation of experimental data, one is actually looking for plausible explanations. We look for a measure of plausibility, with which we can compare different possible explanations, and which can be combined when there are different sets of data. This is contrasted to the conventional measure for probabilities as well as to the proposed measure of possibilities. We define what characteristics this measure of plausibility should have. In getting to the conception of this measure, we explore the relation of plausibility to abductive reasoning, and to Bayesian probabilities. We also compare with the Dempster-Shafer theory of evidence, which also has its own definition for plausibility. Abduction can be associated with biconditionality in inference rules, and this provides a platform to relate to the Collins-Michalski theory of plausibility. Finally, using a formalism for wiring logic onto Hopfield neural networks, we ask if this is relevant in obtaining this measure. ",Looking for plausibility " This article deals with the part family formation problem, which is believed to be moderately complicated to solve in polynomial time, in the vicinity of Group Technology (GT). In the past literature, researchers found that part family formation techniques are principally based on production flow analysis (PFA), which usually considers operational requirements, sequences and time. Part Coding Analysis (PCA) is merely considered in GT, though it is believed to be the proficient method to identify the part families. PCA classifies parts by allotting them to different families based on their resemblances in: (1) design characteristics such as shape and size, and/or (2) manufacturing characteristics (machining requirements). A novel approach based on simulated annealing, namely SAPFOCS, is adopted in this study to develop effective part families exploiting the PCA technique. Thereafter Taguchi's orthogonal design method is employed to solve the critical issues on the subject of parameter selection for the proposed metaheuristic algorithm. The adopted technique is therefore tested on 5 different datasets of size 5 {\times} 9 to 27 {\times} 9 and the obtained results are compared with the C-Linkage clustering technique. The experimental results reported that the proposed metaheuristic algorithm is extremely effective in terms of the quality of the solution obtained and has outperformed the C-Linkage algorithm in most instances. ","SAPFOCS: a metaheuristic based approach to part family formation problems in group technology" " Using the notion of an elementary loop, Gebser and Schaub refined the theorem on loop formulas due to Lin and Zhao by considering loop formulas of elementary loops only.
In this article, we reformulate their definition of an elementary loop, extend it to disjunctive programs, and study several properties of elementary loops, including how maximal elementary loops are related to minimal unfounded sets. The results provide useful insights into the stable model semantics in terms of elementary loops. For a nondisjunctive program, using a graph-theoretic characterization of an elementary loop, we show that the problem of recognizing an elementary loop is tractable. On the other hand, we show that the corresponding problem is {\sf coNP}-complete for a disjunctive program. Based on the notion of an elementary loop, we present the class of Head-Elementary-loop-Free (HEF) programs, which strictly generalizes the class of Head-Cycle-Free (HCF) programs due to Ben-Eliyahu and Dechter. Like an HCF program, an HEF program can be turned into an equivalent nondisjunctive program in polynomial time by shifting head atoms into the body. ",On Elementary Loops of Logic Programs " In this paper we introduce a method for extending binary qualitative direction calculi with adjustable granularity like OPRAm or the star calculus with a granular distance concept. This method is similar to the concept of extending points with an internal reference direction to get oriented points, which are the basic entities in the OPRAm calculus. Even if the spatial objects are, from a geometrical point of view, infinitesimally small points, locally available reference measures are attached. In the case of OPRAm, a reference direction is attached. The same principle also works with local reference distances, which are called elevations. The principle of attaching reference features to a point is called hidden feature attachment. ","Extending Binary Qualitative Direction Calculi with a Granular Distance Concept: Hidden Feature Attachment" " In video games, virtual characters' decision systems often use a simplified representation of the world. To increase both their autonomy and believability we want those characters to be able to learn this representation from human players. We propose to use a model called growing neural gas to learn by imitation the topology of the environment. The implementation of the model, the modifications and the parameters we used are detailed. Then, the quality of the learned representations and their evolution during the learning are studied using different measures. Improvements for the growing neural gas to give more information to the character's model are given in the conclusion. ","Learning a Representation of a Believable Virtual Character's Environment with an Imitation Algorithm" " Current work in planning with preferences assumes that the user's preference models are completely specified and aims to search for a single solution plan. In many real-world planning scenarios, however, the user probably cannot provide any information about her desired plans, or in some cases can only express partial preferences. In such situations, the planner has to present not only one but a set of plans to the user, with the hope that some of them are similar to the plan she prefers. We first propose the usage of different measures to capture the quality of plan sets that are suitable for such scenarios: domain-independent distance measures defined based on plan elements (actions, states, causal links) if no knowledge of the user's preferences is given, and the Integrated Convex Preference measure in case the user's partial preference is provided.
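As one hedged illustration of the action-based flavour of such distance measures (the plans, names and the particular Jaccard-style definition below are invented, not the paper's measures), a plan set can be scored by its average pairwise action distance:

```python
# Minimal sketch of an action-based distance between plans and an average
# pairwise diversity score for a plan set. Plans are just lists of action
# names here; the example plans are hypothetical.
from itertools import combinations

def action_distance(plan_a, plan_b):
    """1 - Jaccard similarity of the two plans' action sets (0 = identical sets)."""
    a, b = set(plan_a), set(plan_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def plan_set_diversity(plans):
    """Average pairwise action distance over the plan set."""
    pairs = list(combinations(plans, 2))
    return sum(action_distance(p, q) for p, q in pairs) / len(pairs)

if __name__ == "__main__":
    plans = [
        ["load(p1,truck)", "drive(a,b)", "unload(p1,truck)"],
        ["load(p1,plane)", "fly(a,b)", "unload(p1,plane)"],
        ["load(p1,truck)", "drive(a,c)", "drive(c,b)", "unload(p1,truck)"],
    ]
    print(round(plan_set_diversity(plans), 3))
```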
We then investigate various heuristic approaches to find a set of plans according to these measures, and present empirical results demonstrating the promise of our approach. ",Planning with Partial Preference Models " Performing effective preference-based data retrieval requires detailed and preferentially meaningful structured information about the current user as well as the items under consideration. A common problem is that representations of items often only consist of mere technical attributes, which do not resemble human perception. This is particularly true for integral items such as movies or songs. It is often claimed that meaningful item features could be extracted from collaborative rating data, which is becoming available through social networking services. However, there is only anecdotal evidence supporting this claim; but if it is true, the extracted information could be very valuable for preference-based data retrieval. In this paper, we propose a methodology to systematically check this common claim. We performed a preliminary investigation on a large collection of movie ratings and present initial evidence. ",Extracting Features from Ratings: The Role of Factor Models " We suggest a procedure that is relevant both to electronic performance and human psychology, so that the creative logic and the respect for human nature appear in a good agreement. The idea is to create an electronic card containing basic information about a person's psychological behavior in order to make it possible to quickly decide about the suitability of one for another. This ""psychological electronics"" approach could be tested via student projects. ","The ""psychological map of the brain"", as a personal information card (file), - a project for the student of the 21st century" " Meaning negotiation (MN) is the general process with which agents reach an agreement about the meaning of a set of terms. Artificial Intelligence scholars have dealt with the problem of MN by means of argumentation schemes, belief merging and information fusion operators, and ontology alignment, but the proposed approaches depend upon the number of participants. In this paper, we give a general model of MN for an arbitrary number of agents, in which each participant discusses with the others her viewpoint by exhibiting it in an actual set of constraints on the meaning of the negotiated terms. We call this presentation of individual viewpoints an angle. The agents do not aim at forming a common viewpoint but, instead, at agreeing about an acceptable common angle. We analyze separately the process of MN by two agents (\emph{bilateral} or \emph{pairwise} MN) and by more than two agents (\emph{multiparty} MN), and we use game theoretic models to understand how the process develops in both cases: the models are Bargaining Game for bilateral MN and English Auction for multiparty MN. We formalize the process of reaching such an agreement by giving a deduction system that comprises rules that are consistent and adequate for representing MN. ",Meaning Negotiation as Inference " Although some information-theoretic measures of uncertainty or granularity have been proposed in rough set theory, these measures are only dependent on the underlying partition and the cardinality of the universe, independent of the lower and upper approximations. This seems somewhat unreasonable, since the basic idea of rough set theory aims at describing vague concepts by the lower and upper approximations.
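To make the objects involved concrete, the toy sketch below (universe, partition and concept are invented) computes the lower and upper approximations of a concept together with a partition-only entropy of the kind just criticized:

```python
# Toy rough-set example: lower/upper approximations of a concept X with
# respect to a partition of the universe, plus a partition entropy that
# depends only on block sizes. Universe, partition and X are hypothetical.
import math

def approximations(partition, concept):
    lower, upper = set(), set()
    for block in partition:
        if block <= concept:    # block entirely inside X
            lower |= block
        if block & concept:     # block overlaps X
            upper |= block
    return lower, upper

def partition_entropy(partition, universe_size):
    """Shannon entropy of the partition (uses only the block sizes)."""
    return -sum((len(b) / universe_size) * math.log2(len(b) / universe_size)
                for b in partition)

if __name__ == "__main__":
    universe = set(range(1, 9))
    partition = [{1, 2}, {3, 4, 5}, {6}, {7, 8}]
    concept = {1, 2, 3, 7}
    lower, upper = approximations(partition, concept)
    print("lower:", sorted(lower))   # [1, 2]
    print("upper:", sorted(upper))   # [1, 2, 3, 4, 5, 7, 8]
    print("H(partition):", round(partition_entropy(partition, len(universe)), 3))
```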
In this paper, we thus define new information-theoretic entropy and co-entropy functions associated with the partition and the approximations to measure the uncertainty and granularity of an approximation space. After introducing the novel notions of entropy and co-entropy, we then examine their properties. In particular, we discuss the relationship of co-entropies between different universes. The theoretical development is accompanied by illustrative numerical examples. ",Information-theoretic measures associated with rough set approximations " One of the main research areas in Artificial Intelligence is the coding of agents (programs) which are able to learn by themselves in any situation. This means that agents must be useful for purposes other than those they were created for, as, for example, playing chess. In this way we try to get closer to the pristine goal of Artificial Intelligence. One of the problems in deciding whether an agent is really intelligent or not is the measurement of its intelligence, since there is currently no way to measure it in a reliable way. The purpose of this project is to create an interpreter that allows for the execution of several environments, including those which are generated randomly, so that an agent (a person or a program) can interact with them. Once the interaction between the agent and the environment is over, the interpreter will measure the intelligence of the agent according to the actions, states and rewards the agent has undergone inside the environment during the test. As a result we will be able to measure agents' intelligence in any possible environment, and to make comparisons between several agents, in order to determine which of them is the most intelligent. In order to perform the tests, the interpreter must be able to randomly generate environments that are really useful to measure agents' intelligence, since not any randomly generated environment will serve that purpose. ",An architecture for the evaluation of intelligent systems " The World Wide Web (WWW) allows people to share information (data) from large database repositories globally. The amount of information has grown to billions of databases. To search this information we need specialized tools known generically as search engines. There are many search engines available today, but retrieving meaningful information is difficult. However, to overcome this problem of retrieving meaningful information intelligently, semantic web technologies are playing a major role. In this paper we present a survey on the search engine generations and the role of search engines in the intelligent web and semantic search technologies. ",Intelligent Semantic Web Search Engines: A Brief Survey " The analysis of online least squares estimation is at the heart of many stochastic sequential decision making problems. We employ tools from the theory of self-normalized processes to provide a simple and self-contained proof of a tail bound of a vector-valued martingale. We use the bound to construct new tighter confidence sets for the least squares estimate. We apply the confidence sets to several online decision problems, such as the multi-armed and the linearly parametrized bandit problems. The confidence sets are potentially applicable to other problems such as sleeping bandits, generalized linear bandits, and other linear control problems. We improve the regret bound of the Upper Confidence Bound (UCB) algorithm of Auer et al.
(2002) and show that its regret is, with high probability, a problem-dependent constant. In the case of linear bandits (Dani et al., 2008), we improve the problem-dependent bound in the dimension and number of time steps. Furthermore, as opposed to the previous result, we prove that our bound holds for small sample sizes, and at the same time the worst case bound is improved by a logarithmic factor and the constant is improved. ","Online Least Squares Estimation with Self-Normalized Processes: An Application to Bandit Problems" " This paper presents a new multi-objective hybrid model that combines the neighborhood-search strength of tabu search (TS) with the strong exploration capacity of evolutionary algorithms. This model was implemented and tested on benchmark functions (ZDT1, ZDT2, and ZDT3), using a network of computers. ","Hybrid Model for Solving Multi-Objective Problems Using Evolutionary Algorithm and Tabu Search" " An algorithm running in O(1.1995^n) is presented for counting models of exact satisfiability formulae (#XSAT). This is faster than the previous best algorithm, which runs in O(1.2190^n). In order to improve the efficiency of the algorithm, a new principle, i.e. the common literals principle, is introduced to simplify formulae. This allows us to eliminate more common literals. In addition, we inject the resolution principles into solving the #XSAT problem for the first time, and this further improves the efficiency of the algorithm. ",New Worst-Case Upper Bound for #XSAT " Rules in logic programming encode information about mutual interdependencies between literals that is not captured by any of the commonly used semantics. This information becomes essential as soon as a program needs to be modified or further manipulated. We argue that, in these cases, a program should not be viewed solely as the set of its models. Instead, it should be viewed and manipulated as the set of sets of models of each rule inside it. With this in mind, we investigate and highlight relations between the SE-model semantics and individual rules. We identify a set of representatives of rule equivalence classes induced by SE-models, and so pinpoint the exact expressivity of this semantics with respect to a single rule. We also characterise the class of sets of SE-interpretations representable by a single rule. Finally, we discuss the introduction of two notions of equivalence, both stronger than strong equivalence [1] and weaker than strong update equivalence [2], which seem more suitable whenever the dependency information found in rules is of interest. ",Back and Forth Between Rules and SE-Models (Extended Version) " The global objective of this work is to provide practical optimization methods to companies involved in inventory routing problems, taking into account this new type of data. Also, companies are sometimes not able to deal with changing plans every period and would like to adopt regular structures for serving customers. ","Practical inventory routing: A problem definition and an optimization method" " Identification of critical or weak buses for a given operating condition is an important task in the load dispatch centre. It has become more vital in view of the threat of voltage instability leading to voltage collapse. This paper presents a fuzzy approach for ranking critical buses in a power system under normal and network contingencies based on the Line Flow index and voltage profiles at load buses.
The Line Flow index determines the maximum load that can be connected to a bus in order to maintain stability before the system reaches its bifurcation point. The Line Flow index (LF index), along with voltage profiles at the load buses, is represented in Fuzzy Set notation. Further, they are evaluated using fuzzy rules to compute a Criticality Index. Based on this index, critical buses are ranked. The bus with the highest rank is the weakest bus, as it can withstand only a small amount of load before causing voltage collapse. The proposed method is tested on a Five Bus Test System. ","Fuzzy Approach to Critical Bus Ranking under Normal and Line Outage Contingencies" " The process-based semantic composition of Web Services is gaining considerable momentum as an approach for the effective integration of distributed, heterogeneous, and autonomous applications. To compose Web Services semantically, we need an ontology. There are several ways of inserting semantics in Web Services. One of them consists of using description languages like OWL-S. In this paper, we introduce our work, which proposes a new model and the use of semantic matching technology for semantic and dynamic composition of ebXML business processes. ","An Agent Based Architecture (Using Planning) for Dynamic and Semantic Web Services Composition in an EBXML Context" " The problem of business-IT alignment is of widespread economic concern. As one way of addressing the problem, this paper describes an online system that functions as a kind of Wiki -- one that supports the collaborative writing and running of business and scientific applications, as rules in open vocabulary, executable English, using a browser. Since the rules are in English, they are indexed by Google and other search engines. This is useful when looking for rules for a task that one has in mind. The design of the system integrates the semantics of data, with a semantics of an inference method, and also with the meanings of English sentences. As such, the system has functionality that may be useful for the Rules, Logic, Proof and Trust requirements of the Semantic Web. The system accepts rules, and small numbers of facts, typed or copy-pasted directly into a browser. One can then run the rules, again using a browser. For larger amounts of data, the system uses information in the rules to automatically generate and run SQL over networked databases. From a few highly declarative rules, the system typically generates SQL that would be too complicated to write reliably by hand. However, the system can explain its results in step-by-step hypertexted English, at the business or scientific level. As befits a Wiki, shared use of the system is free. ","A Wiki for Business Rules in Open Vocabulary, Executable English" " We propose a long-term memory design for artificial general intelligence based on Solomonoff's incremental machine learning methods. We use R5RS Scheme and its standard library with a few omissions as the reference machine. We introduce a Levin Search variant based on Stochastic Context Free Grammar together with four synergistic update algorithms that use the same grammar as a guiding probability distribution of programs. The update algorithms include adjusting production probabilities, re-using previous solutions, learning programming idioms and discovery of frequent subprograms. Experiments with two training sequences demonstrate that our approach to incremental learning is effective.
",Teraflop-scale Incremental Machine Learning " In Artificial Intelligence, Coalition Structure Generation (CSG) refers to those cooperative complex problems that require finding an optimal partition, maximising a social welfare, of a set of entities involved in a system into exhaustive and disjoint coalitions. The solution of the CSG problem finds applications in many fields such as Machine Learning (covering machines, clustering), Data Mining (decision tree, discretization), Graph Theory, Natural Language Processing (aggregation), Semantic Web (service composition), and Bioinformatics. The problem of finding the optimal coalition structure is NP-complete. In this paper we present a greedy adaptive search procedure (GRASP) with path-relinking to efficiently search the space of coalition structures. Experiments and comparisons to other algorithms prove the validity of the proposed method in solving this hard combinatorial problem. ",GRASP and path-relinking for Coalition Structure Generation " Signatures are used as a biometric in various systems, and at the same time every signature signed by a person is distinct. So, it is very important to have a computerized signature verification system. In an offline signature verification system, dynamic features are obviously not available, but one can use a signature as an image and apply image processing techniques to make an effective offline signature verification system. The author proposes an intelligent network that uses both a directional feature and energy density as inputs to the same network and classifies the signature. A neural network is used as a classifier for this system. The results are compared with both the very basic energy density method and a simple directional feature method of offline signature verification, and this proposed new network is found to be very effective compared to the above two methods, especially for a small number of training samples, which can be implemented practically. ","A Directional Feature with Energy based Offline Signature Verification Network" " Some recent works in conditional planning have proposed reachability heuristics to improve planner scalability, but many lack a formal description of the properties of their distance estimates. To place previous work in context and extend work on heuristics for conditional planning, we provide a formal basis for distance estimates between belief states. We give a definition for the distance between belief states that relies on aggregating underlying state distance measures. We give several techniques to aggregate state distances and their associated properties. Many existing heuristics exhibit a subset of the properties, but in order to provide a standardized comparison we present several generalizations of planning graph heuristics that are used in a single planner. We complement our belief state distance estimate framework by also investigating efficient planning graph data structures that incorporate BDDs to compute the most effective heuristics. We developed two planners to serve as test-beds for our investigation. The first, CAltAlt, is a conformant regression planner that uses A* search. The second, POND, is a conditional progression planner that uses AO* search. We show the relative effectiveness of our heuristic techniques within these planners. We also compare the performance of these planners with several state-of-the-art approaches in conditional planning.
",Planning Graph Heuristics for Belief Space Search " The natural immune system plays a vital role in the survival of all living beings. It provides a mechanism for an organism to defend itself from external predators, making it a consistent system capable of adapting itself for survival in case of changes. The human immune system has motivated scientists and engineers to find powerful information processing algorithms that have solved complex engineering tasks. This paper explores one of the various possibilities for solving problems in a multi-agent scenario wherein multiple robots are deployed to achieve a goal collectively. The final goal is dependent on the performance of each individual robot and its survival without having to lose its energy beyond a predetermined threshold value, achieved by deploying an evolutionary computational technique otherwise called the artificial immune system, which imitates the biological immune system. ","An Artificial Immune System Model for Multi-Agents Resource Sharing in Distributed Environments" " Most machine learning tools work with a single table where each row is an instance and each column is an attribute. Each cell of the table contains an attribute value for an instance. This representation prevents one important form of learning, which is classification based on groups of correlated records, such as multiple exams of a single patient, internet customer preferences, weather forecast or prediction of sea conditions for a given day. To some extent, relational learning methods, such as inductive logic programming, can capture this correlation through the use of intensional predicates added to the background knowledge. In this work, we propose SPPAM, an algorithm that aggregates past observations in one single record. We show that applying SPPAM to the original correlated data, before the learning task, can produce classifiers that are better than the ones trained using all records. ",SPPAM - Statistical PreProcessing AlgorithM " An emotional version of the Sapir-Whorf hypothesis suggests that differences in language emotionalities influence differences among cultures no less than conceptual differences. Conceptual contents of languages and cultures are to a significant extent determined by words and their semantic differences; these could be borrowed among languages and exchanged among cultures. Emotional differences, as suggested in the paper, are related to grammar and mostly cannot be borrowed. Conceptual and emotional mechanisms of languages are considered here along with their functions in the mind and cultural evolution. A fundamental contradiction in the human mind is considered: language evolution requires reduced emotionality, but ""too low"" emotionality makes language ""irrelevant to life,"" disconnected from sensory-motor experience. Neural mechanisms of these processes are suggested as well as their mathematical models: the knowledge instinct, the language instinct, the dual model connecting language and cognition, dynamic logic, neural modeling fields. Mathematical results are related to cognitive science, linguistics, and psychology. Experimental evidence and theoretical arguments are discussed. Approximate equations for the evolution of human minds and cultures are obtained.
Their solutions identify three types of cultures: ""conceptual""-pragmatic cultures, in which emotionality of language is reduced and differentiation overtakes synthesis, resulting in fast evolution at the price of uncertainty of values, self-doubts, and internal crises; ""traditional-emotional"" cultures where differentiation lags behind synthesis, resulting in cultural stability at the price of stagnation; and ""multi-cultural"" societies combining fast cultural evolution and stability. Unsolved problems and future theoretical and experimental directions are discussed. ","Language, Emotions, and Cultures: Emotional Sapir-Whorf Hypothesis" " Knowledge compilation is an approach to tackle the computational intractability of general reasoning problems. According to this approach, knowledge bases are converted off-line into a target compilation language which is tractable for on-line querying. Reduced ordered binary decision diagram (ROBDD) is one of the most influential target languages. We generalize ROBDD by associating some implied literals with each node, and the new language is called reduced ordered binary decision diagram with implied literals (ROBDD-L). Then we discuss a kind of subset of ROBDD-L called ROBDD-i, with precisely i implied literals (0 \leq i \leq \infty). In particular, ROBDD-0 is isomorphic to ROBDD; ROBDD-\infty requires that each node be associated with as many implied literals as possible. We show that ROBDD-i has uniqueness over a specific variable order, and ROBDD-\infty is the most succinct subset in ROBDD-L and can meet most of the querying requirements involved in the knowledge compilation map. Finally, we propose an ROBDD-i compilation algorithm for any i and a ROBDD-\infty compilation algorithm. Based on them, we implement a ROBDD-L package called BDDjLu and then get some conclusions from preliminary experimental results: ROBDD-\infty is obviously smaller than ROBDD for all benchmarks; ROBDD-\infty is smaller than the d-DNNF for the benchmarks whose compilation results are relatively small; it seems that it is better to transform ROBDDs-\infty into FBDDs and ROBDDs rather than compile the benchmarks directly. ","Reduced Ordered Binary Decision Diagram with Implied Literals: A New knowledge Compilation Approach" " CHRONIOUS is an Open, Ubiquitous and Adaptive Chronic Disease Management Platform for Chronic Obstructive Pulmonary Disease (COPD), Chronic Kidney Disease (CKD) and Renal Insufficiency. It consists of several modules: an ontology-based literature search engine, a rule-based decision support system, remote sensors interacting with lifestyle interfaces (PDA, monitor touchscreen) and a machine learning module. All these modules interact with each other to allow the monitoring of two types of chronic diseases and to help clinicians in taking decisions for treatment purposes. This paper illustrates how some machine learning algorithms and a rule-based decision support system can be used in smart devices to monitor chronic patients. We will analyse how a set of machine learning algorithms can be used in smart devices to alert the clinician in case of a worsening trend in a patient's health condition. ","Using Soft Computer Techniques on Smart Devices for Monitoring Chronic Diseases: the CHRONIOUS case" " We show that several important resource allocation problems in wireless networks fit within the common framework of Constraint Satisfaction Problems (CSPs).
Inspired by the requirements of these applications, where variables are located at distinct network devices that may not be able to communicate but may interfere, we define natural criteria that a CSP solver must possess in order to be practical. We term these algorithms decentralized CSP solvers. The best known CSP solvers were designed for centralized problems and do not meet these criteria. We introduce a stochastic decentralized CSP solver and prove that it will find a solution in almost surely finite time, should one exist, also showing it has many practically desirable properties. We benchmark the algorithm's performance on a well-studied class of CSPs, random k-SAT, illustrating that the time the algorithm takes to find a satisfying assignment is competitive with stochastic centralized solvers on problems with on the order of a thousand variables despite its decentralized nature. We demonstrate the solver's practical utility for the problems that motivated its introduction by using it to find a non-interfering channel allocation for a network formed from data from downtown Manhattan. ",Decentralized Constraint Satisfaction " This paper presents an enhancement of the medial axis algorithm to be used for finding the optimal shortest path for a developed cognitive map. The cognitive map has been developed based on architectural blueprint maps. The idea of using the medial axis is to find the main path's central pixels; each center pixel represents the center distance between two side border pixels. The need for these pixels in the algorithm comes from the need to build a network of nodes for the path, where each node represents a turning in the real world (left, right, critical left, critical right...). The algorithm also ignores center-pixel paths that are too small for intelligent robot navigation. The idea of this algorithm is to find the possible shortest path between start and end points. The goal of this research is to extract a simple, robust representation of the shape of the cognitive map together with the optimal shortest path between start and end points. The intelligent robot will use this algorithm in order to decrease the time that is needed for sweeping the targeted building. ",Finding Shortest Path for Developed Cognitive Map Using Medial Axis " One of the first steps in the realization of an automatic system of check recognition is the extraction of the handwritten area. We propose in this paper a hybrid method to extract these areas. This method is based on digit recognition by Fourier descriptors and different steps of colored image processing. It requires the recognition of the bank code, which is located in the check marking band, as well as the recognition of the handwriting color by the method of difference of histograms. The area extraction is then carried out by the use of some mathematical morphology tools. ","Extraction of handwritten areas from colored image of bank checks by an hybrid method" " Recently, several researchers have found that cost-based satisficing search with A* often runs into problems. Although some ""work arounds"" have been proposed to ameliorate the problem, there has not been any concerted effort to pinpoint its origin. In this paper, we argue that the origins can be traced back to the wide variance in action costs that is observed in most planning domains.
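The effect can be reproduced on a deliberately contrived toy search space (a hedged sketch with invented states and costs, not the paper's benchmarks or analysis): a long chain of nearly-free actions is exhaustively explored under a cost-based evaluation before a single expensive action that reaches the goal, whereas a unit-cost (size-based) evaluation reaches the goal after a handful of expansions:

```python
# Toy demonstration of how wide action-cost variance can mislead cost-based
# best-first search (heuristic fixed at zero). The graph, costs and chain
# length are hypothetical.
import heapq

def build_graph(chain_len=500):
    g = {"s": [("dead0", 0.001), ("near_goal", 10.0)], "near_goal": [("goal", 10.0)]}
    for i in range(chain_len):
        g[f"dead{i}"] = [(f"dead{i+1}", 0.001)]   # cheap dead-end chain
    g[f"dead{chain_len}"] = []
    g["goal"] = []
    return g

def best_first(graph, start, goal, unit_cost=False):
    """Best-first search on accumulated g-values; returns the number of expansions."""
    frontier, seen, expansions = [(0.0, start)], set(), 0
    while frontier:
        g_val, node = heapq.heappop(frontier)
        if node in seen:
            continue
        seen.add(node)
        expansions += 1
        if node == goal:
            return expansions
        for succ, cost in graph[node]:
            step = 1.0 if unit_cost else cost
            heapq.heappush(frontier, (g_val + step, succ))
    return expansions

if __name__ == "__main__":
    graph = build_graph()
    print("cost-based expansions:", best_first(graph, "s", "goal", unit_cost=False))
    print("size-based expansions:", best_first(graph, "s", "goal", unit_cost=True))
```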
We show that such cost variance misleads A* search, and that this is no trifling detail or accidental phenomenon, but a systemic weakness of the very concept of ""cost-based evaluation functions + systematic search + combinatorial graphs"". We show that satisficing search with size-based evaluation functions is largely immune to this problem. ",Cost Based Satisficing Search Considered Harmful " We propose AllDiffPrecedence, a new global constraint that combines an AllDifferent constraint with precedence constraints that strictly order given pairs of variables. We identify a number of applications for this global constraint including instruction scheduling and symmetry breaking. We give an efficient propagation algorithm that enforces bounds consistency on this global constraint. We show how to implement this propagator using a decomposition that extends the bounds consistency enforcing decomposition proposed for the AllDifferent constraint. Finally, we prove that enforcing domain consistency on this global constraint is NP-hard in general. ",The AllDifferent Constraint with Precedences " Ontologies and rules are usually loosely coupled in knowledge representation formalisms. In fact, ontologies use open-world reasoning while the leading semantics for rules use non-monotonic, closed-world reasoning. One exception is the tightly-coupled framework of Minimal Knowledge and Negation as Failure (MKNF), which allows statements about individuals to be jointly derived via entailment from an ontology and inferences from rules. Nonetheless, the practical usefulness of MKNF has not always been clear, although recent work has formalized a general resolution-based method for querying MKNF when rules are taken to have the well-founded semantics, and the ontology is modeled by a general oracle. That work leaves open what algorithms should be used to relate the entailments of the ontology and the inferences of rules. In this paper we provide such algorithms, and describe the implementation of a query-driven system, CDF-Rules, for hybrid knowledge bases combining both (non-monotonic) rules under the well-founded semantics and a (monotonic) ontology, represented by a CDF Type-1 (ALQ) theory. To appear in Theory and Practice of Logic Programming (TPLP) ","A Goal-Directed Implementation of Query Answering for Hybrid MKNF Knowledge Bases" " BoolVar/PB is an open source java library dedicated to the translation of pseudo-Boolean constraints into CNF formulae. Input constraints can be categorized with tags. Several encoding schemes are implemented in a way that each input constraint can be translated using one or several encoders, according to the related tags. The library can be easily extended by adding new encoders and/or new output formats. ","BoolVar/PB v1.0, a java library for translating pseudo-Boolean constraints into CNF formulae" " In the present paper, we try to propose a self-similar network theory for the basic understanding. By extending the natural languages to a kind of so-called ideally sufficient language, we can proceed a few steps to the investigation of the language searching and the language understanding of AI. Image understanding, and the familiarity of the brain with the surrounding environment, are also discussed. Group effects are discussed by addressing the essence of the power of influences, and constructing the influence network of a society. We also give a discussion of inspirations.
",On Understanding and Machine Understanding " Phase transitions in many complex combinatorial problems have been widely studied in the past decade. In this paper, we investigate phase transitions in knowledge compilation empirically, where DFA, OBDD and d-DNNF are chosen as the target languages to compile random k-SAT instances. We perform intensive experiments to analyze the sizes of compilation results and draw the following conclusions: there exists an easy-hard-easy pattern in compilations; the peak point of sizes in the pattern is only related to the ratio of the number of clauses to that of variables when k is fixed, regardless of target languages; most sizes of compilation results increase exponentially as the number of variables grows, but there also exists a phase transition that separates a polynomial-increment region from the exponential-increment region. Moreover, we explain why the phase transition in compilations occurs by analyzing microstructures of DFAs, and conclude that a kind of solution interchangeability with more than 2 variables has a sharp transition near the peak point of the easy-hard-easy pattern, and thus it has a great impact on sizes of DFAs. ",Phase Transitions in Knowledge Compilation: an Experimental Study " A definition of intelligence is given in terms of performance that can be quantitatively measured. In this study, we have presented a conceptual model of an Intelligent Agent System for an Automatic Vehicle Checking Agent (VCA). To achieve this goal, we have introduced several kinds of agents that exhibit intelligent features. These are the Management agent, Internal agent, External agent, Watcher agent and Report agent. Metrics and measurements are suggested for evaluating the performance of the Automatic Vehicle Checking Agent (VCA). Calibration data and test facilities are suggested to facilitate the development of intelligent systems. ",Automatic Vehicle Checking Agent (VCA) " This paper presents the design and development of a proposed rule-based Decision Support System that will help students in selecting the most suitable faculty/major while taking admission in Gomal University, Dera Ismail Khan, Pakistan. The basic idea of our approach is to design a model for testing and measuring student capabilities like intelligence, understanding, comprehension and mathematical concepts, plus his/her past academic record and his/her intelligence level, and applying the module results to a rule-based decision support system to determine the compatibility of those capabilities with the available faculties/majors in Gomal University. The result is shown as a list of suggested faculties/majors together with the student's capabilities and abilities. ","A Proposed Decision Support System/Expert System for Guiding Fresh Students in Selecting a Faculty in Gomal University, Pakistan" " Heuristics are crucial tools in decreasing search effort in varied fields of AI. In order to be effective, a heuristic must be efficient to compute, as well as provide useful information to the search algorithm. However, some well-known heuristics which do well in reducing backtracking are so heavy that the gain of deploying them in a search algorithm might be outweighed by their overhead. We propose a rational metareasoning approach to decide when to deploy heuristics, using CSP backtracking search as a case study. In particular, a value of information approach is taken to adaptive deployment of solution-count estimation heuristics for value ordering.
Empirical results show that indeed the proposed mechanism successfully balances the tradeoff between decreasing backtracking and heuristic computational overhead, resulting in a significant overall search time reduction. ",Rational Deployment of CSP Heuristics " Regularization is a well studied problem in the context of neural networks. It is usually used to improve the generalization performance when the number of input samples is relatively small or heavily contaminated with noise. The regularization of a parametric model can be achieved in different manners some of which are early stopping (Morgan and Bourlard, 1990), weight decay, output smoothing that are used to avoid overfitting during the training of the considered model. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters (Krogh and Hertz, 1991). Using Bishop's approximation (Bishop, 1995) of the objective function when a restricted type of noise is added to the input of a parametric function, we derive the higher order terms of the Taylor expansion and analyze the coefficients of the regularization terms induced by the noisy input. In particular we study the effect of penalizing the Hessian of the mapping function with respect to the input in terms of generalization performance. We also show how we can control independently this coefficient by explicitly penalizing the Jacobian of the mapping function on corrupted inputs. ","Adding noise to the input of a model trained with a regularized objective" " We solve constraint satisfaction problems through translation to answer set programming (ASP). Our reformulations have the property that unit-propagation in the ASP solver achieves well defined local consistency properties like arc, bound and range consistency. Experiments demonstrate the computational value of this approach. ",Translation-based Constraint Answer Set Solving " Recent papers address the issue of updating the instance level of knowledge bases expressed in Description Logic following a model-based approach. One of the outcomes of these papers is that the result of updating a knowledge base K is generally not expressible in the Description Logic used to express K. In this paper we introduce a formula-based approach to this problem, by revisiting some research work on formula-based updates developed in the '80s, in particular the WIDTIO (When In Doubt, Throw It Out) approach. We show that our operator enjoys desirable properties, including that both insertions and deletions according to such operator can be expressed in the DL used for the original KB. Also, we present polynomial time algorithms for the evolution of the instance level knowledge bases expressed in the most expressive Description Logics of the DL-lite family. ",On the evolution of the instance level of DL-lite knowledge bases " We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction which in turn yields robust features on the activation layer. 
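For a sigmoid encoder this penalty has a simple closed form, which the following minimal numpy sketch (random placeholder weights and input, not the authors' code) checks against an explicitly built Jacobian:

```python
# Minimal numpy sketch of the Jacobian (Frobenius norm) penalty for a sigmoid
# encoder h = sigmoid(W x + b). For this encoder the Jacobian has entries
# J[j, i] = h[j] * (1 - h[j]) * W[j, i], so the squared Frobenius norm can be
# computed without materializing J. Weights and input are random placeholders.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(W, b, x):
    h = sigmoid(W @ x + b)                        # encoder activations
    jac = (h * (1.0 - h))[:, None] * W            # explicit Jacobian, for checking
    closed_form = np.sum((h * (1.0 - h)) ** 2 * np.sum(W ** 2, axis=1))
    assert np.isclose(np.sum(jac ** 2), closed_form)
    return closed_form

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 6))                   # 6 inputs -> 4 hidden units
    b = rng.normal(size=4)
    x = rng.normal(size=6)
    print("||J_f(x)||_F^2 =", contractive_penalty(W, b, x))
```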
Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize a MLP, we achieve state of the art classification error on a range of datasets, surpassing other methods of pre-training. ",Learning invariant features through local space contraction " The study of arguments as abstract entities and their interaction as introduced by Dung (Artificial Intelligence 177, 1995) has become one of the most active research branches within Artificial Intelligence and Reasoning. A main issue for abstract argumentation systems is the selection of acceptable sets of arguments. Value-based argumentation, as introduced by Bench-Capon (J. Logic Comput. 13, 2003), extends Dung's framework. It takes into account the relative strength of arguments with respect to some ranking representing an audience: an argument is subjectively accepted if it is accepted with respect to some audience, it is objectively accepted if it is accepted with respect to all audiences. Deciding whether an argument is subjectively or objectively accepted, respectively, are computationally intractable problems. In fact, the problems remain intractable under structural restrictions that render the main computational problems for non-value-based argumentation systems tractable. In this paper we identify nontrivial classes of value-based argumentation systems for which the acceptance problems are polynomial-time tractable. The classes are defined by means of structural restrictions in terms of the underlying graphical structure of the value-based system. Furthermore we show that the acceptance problems are intractable for two classes of value-based systems that where conjectured to be tractable by Dunne (Artificial Intelligence 171, 2007). ",Algorithms and Complexity Results for Persuasive Argumentation " In this paper, we investigate the hybrid tractability of binary Quantified Constraint Satisfaction Problems (QCSPs). First, a basic tractable class of binary QCSPs is identified by using the broken-triangle property. In this class, the variable ordering for the broken-triangle property must be same as that in the prefix of the QCSP. Second, we break this restriction to allow that existentially quantified variables can be shifted within or out of their blocks, and thus identify some novel tractable classes by introducing the broken-angle property. Finally, we identify a more generalized tractable class, i.e., the min-of-max extendable class for QCSPs. ","Hybrid Tractable Classes of Binary Quantified Constraint Satisfaction Problems" " Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In such cases, the goal should be to generate plans that are robust with respect to any known incompleteness of the domain. 
In this paper, we first introduce annotations expressing the knowledge of the domain incompleteness, and formalize the notion of plan robustness with respect to an incomplete domain model. We then propose an approach to compiling the problem of finding robust plans to the conformant probabilistic planning problem. We present experimental results with Probabilistic-FF, a state-of-the-art planner, showing the promise of our approach. ",Synthesizing Robust Plans under Incomplete Domain Models " Over the years, nonmonotonic rules have proven to be a very expressive and useful knowledge representation paradigm. They have recently been used to complement the expressive power of Description Logics (DLs), leading to the study of integrative formal frameworks, generally referred to as hybrid knowledge bases, where both DL axioms and rules can be used to represent knowledge. The need to use these hybrid knowledge bases in dynamic domains has called for the development of update operators, which, given the substantially different way Description Logics and rules are usually updated, has turned out to be an extremely difficult task. In [SL10], a first step towards addressing this problem was taken, and an update operator for hybrid knowledge bases was proposed. Despite its significance -- not only for being the first update operator for hybrid knowledge bases in the literature, but also because it has some applications - this operator was defined for a restricted class of problems where only the ABox was allowed to change, which considerably diminished its applicability. Many applications that use hybrid knowledge bases in dynamic scenarios require both DL axioms and rules to be updated. In this paper, motivated by real world applications, we introduce an update operator for a large class of hybrid knowledge bases where both the DL component as well as the rule component are allowed to dynamically change. We introduce splitting sequences and splitting theorem for hybrid knowledge bases, use them to define a modular update semantics, investigate its basic properties, and illustrate its use on a realistic example about cargo imports. ",Splitting and Updating Hybrid Knowledge Bases (Extended Version) " A fundamental task for propositional logic is to compute models of propositional formulas. Programs developed for this task are called satisfiability solvers. We show that transition systems introduced by Nieuwenhuis, Oliveras, and Tinelli to model and analyze satisfiability solvers can be adapted for solvers developed for two other propositional formalisms: logic programming under the answer-set semantics, and the logic PC(ID). We show that in each case the task of computing models can be seen as ""satisfiability modulo answer-set programming,"" where the goal is to find a model of a theory that also is an answer set of a certain program. The unifying perspective we develop shows, in particular, that solvers CLASP and MINISATID are closely related despite being developed for different formalisms, one for answer-set programming and the latter for the logic PC(ID). ",Transition Systems for Model Generators - A Unifying Approach " This paper describes a graph clustering algorithm that aims to minimize the normalized cut criterion and has a model order selection procedure. The performance of the proposed algorithm is comparable to spectral approaches in terms of minimizing normalized cut. However, unlike spectral approaches, the proposed algorithm scales to graphs with millions of nodes and edges. 
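The normalized cut objective itself is straightforward to state in code; the toy sketch below (graph and candidate partitions invented, the scalable algorithm not reproduced) scores two 2-way partitions of a small graph:

```python
# Small sketch of the normalized-cut objective for a k-way partition of an
# undirected weighted graph, using plain adjacency dictionaries. The toy graph
# and the candidate partitions are hypothetical.
def normalized_cut(adj, partition):
    """Sum over clusters of cut(A, V \\ A) / vol(A), with vol(A) = total degree in A."""
    total = 0.0
    for cluster in partition:
        cluster = set(cluster)
        cut = sum(w for u in cluster for v, w in adj[u].items() if v not in cluster)
        vol = sum(w for u in cluster for w in adj[u].values())
        total += cut / vol
    return total

if __name__ == "__main__":
    # Two triangles joined by a single bridge edge (all edge weights 1).
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    adj = {u: {} for e in edges for u in e}
    for u, v in edges:
        adj[u][v] = adj[v][u] = 1.0
    good = [{0, 1, 2}, {3, 4, 5}]    # cut only the bridge
    bad = [{0, 1, 3}, {2, 4, 5}]     # cut through both triangles
    print("ncut(good) =", round(normalized_cut(adj, good), 3))
    print("ncut(bad)  =", round(normalized_cut(adj, bad), 3))
```

Cutting only the bridge edge between the two triangles gives the lower, i.e. better, normalized cut.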
The algorithm consists of three components that are processed sequentially: a greedy agglomerative hierarchical clustering procedure, model order selection, and a local refinement. For a graph of n nodes and O(n) edges, the computational complexity of the algorithm is O(n log^2 n), a major improvement over the O(n^3) complexity of spectral methods. Experiments are performed on real and synthetic networks to demonstrate the scalability of the proposed approach, the effectiveness of the model order selection procedure, and the performance of the proposed algorithm in terms of minimizing the normalized cut metric. ",GANC: Greedy Agglomerative Normalized Cut " Machine-part cell formation is used in cellular manufacturing in order to process a large variety of parts with improved quality, lower work-in-process levels, and reduced manufacturing lead-time and customer response time, while retaining flexibility for new products. This paper presents a novel approach for obtaining machine cells and part families. In cellular manufacturing, the fundamental problem is the formation of part families and machine cells. The present paper deals with the Self Organising Map (SOM) method, an unsupervised learning algorithm in Artificial Intelligence, which is used as a visually decipherable clustering tool for machine-part cell formation. The objective of the paper is to cluster the binary machine-part matrix through visually decipherable clusters of SOM color-coding and labelling via the SOM map nodes in such a way that the part families are processed in those machine cells. The U-matrix, component plane, principal component projection, scatter plot and histogram of SOM have been reported in the present work for the successful visualization of the machine-part cell formation. Computational results with the proposed algorithm on a set of group technology problems available in the literature are also presented. The proposed SOM approach produced solutions with a grouping efficacy that is at least as good as any results earlier reported in the literature, improved the grouping efficacy for 70% of the problems, and is found to be immensely useful to both industry practitioners and researchers. ","Machine-Part cell formation through visual decipherable clustering of Self Organizing Map" " Rubik's Cube is an easily-understood puzzle, which was originally called the ""magic cube"". It is a well-known planning problem, which has been studied for a long time. Yet many simple properties remain unknown. This paper studies whether modern SAT solvers are applicable to this puzzle. To the best of our knowledge, we are the first to translate Rubik's Cube to a SAT problem. To reduce the number of variables and clauses needed for the encoding, we replace a naive approach that uses 6 Boolean variables to represent each color on each facelet with a new approach that uses 3 or 2 Boolean variables. In order to solve Rubik's Cube quickly, we replace the direct encoding of 18 turns with the layer encoding of 18-subtype turns based on 6-type turns. To speed up the solving further, we encode some properties of the two-phase algorithm as an additional constraint, and restrict some move sequences by adding some constraint clauses. An efficient encoding alone cannot solve this puzzle. For this reason, we improve the existing SAT solvers, and develop a new SAT solver based on PrecoSAT, though it is suited only for Rubik's Cube. 
The new SAT solver replaces the lookahead solving strategy with an ALO (at-least-one) solving strategy, and decomposes the original problem into sub-problems. Each sub-problem is solved by PrecoSAT. The empirical results demonstrate that both our SAT translation and our new solving technique are efficient. Without the efficient SAT encoding and the new solving technique, Rubik's Cube still could not be solved by any SAT solver. Using the improved SAT solver, we can always find a solution of length 20 in a reasonable time. Although our solver is slower than Kociemba's algorithm, which uses lookup tables, it does not require a huge lookup table. ",Solving Rubik's Cube Using SAT Solvers " The World Wide Web no longer consists just of HTML pages. Our work sheds light on a number of trends on the Internet that go beyond simple Web pages. The hidden Web provides a wealth of data in semi-structured form, accessible through Web forms and Web services. These services, as well as numerous other applications on the Web, commonly use XML, the eXtensible Markup Language. XML has become the lingua franca of the Internet that allows customized markups to be defined for specific domains. On top of XML, the Semantic Web grows as a common structured data source. In this work, we first explain each of these developments in detail. Using real-world examples from scientific domains of great interest today, we then demonstrate how these new developments can assist the management, harvesting, and organization of data on the Web. On the way, we also illustrate the current research avenues in these domains. We believe that this effort would help bridge multiple database tracks, thereby attracting researchers with a view to extending database technology. ","The Hidden Web, XML and Semantic Web: A Scientific Data Management Perspective" " Developing smart house systems has been a great challenge for researchers and engineers in this area because the implementation and evaluation of these systems is costly and very time consuming. The difficulty of testing a designed smart house before actually building it is considered an obstacle towards an efficient smart house project. This is because of the variety of sensors, home appliances and devices available for a real smart environment. In this paper, we present the design and implementation of a multi-purpose smart house simulation system for designing and simulating all aspects of a smart house environment. This simulator provides the ability to design the house plan and different virtual sensors and appliances in a two dimensional model of the virtual house environment. This simulator can connect to any external smart house remote controlling system, making the evaluation of such systems much easier than before. It also supports the detailed addition of new emerging sensors and devices to help maintain its compatibility with future simulation needs. Scenarios can also be defined for testing various possible combinations of device states, so different criteria and variables can be simply evaluated without the need of experimenting on a real environment. ",A Multi-Purpose Scenario-based Simulator for Smart House Environments " We introduce the Xapagy cognitive architecture: a software system designed to perform narrative reasoning. The architecture has been designed from scratch to model and mimic the activities performed by humans when witnessing, reading, recalling, narrating and talking about stories. 
",Xapagy: a cognitive architecture for narrative reasoning " This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be directly obtained without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is consistently handled in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications. ","Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians" " Decision-theoretic agents predict and evaluate the results of their actions using a model, or ontology, of their environment. An agent's goal, or utility function, may also be specified in terms of the states of, or entities within, its ontology. If the agent may upgrade or replace its ontology, it faces a crisis: the agent's original goal may not be well-defined with respect to its new ontology. This crisis must be resolved before the agent can make plans towards achieving its goals. We discuss in this paper which sorts of agents will undergo ontological crises and why we may want to create such agents. We present some concrete examples, and argue that a well-defined procedure for resolving ontological crises is needed. We point to some possible approaches to solving this problem, and evaluate these methods on our examples. ",Ontological Crises in Artificial Agents' Value Systems " A knowledge system S describing a part of real world does in general not contain complete information. Reasoning with incomplete information is prone to errors since any belief derived from S may be false in the present state of the world. A false belief may suggest wrong decisions and lead to harmful actions. So an important goal is to make false beliefs as unlikely as possible. This work introduces the notions of ""typical atoms"" and ""typical models"", and shows that reasoning with typical models minimizes the expected number of false beliefs over all ways of using incomplete information. Various properties of typical models are studied, in particular, correctness and stability of beliefs suggested by typical models, and their connection to oblivious reasoning. ",Typical models: minimizing false beliefs " We present a new approach to path planning, called the ""Ariadne's clew algorithm"". It is designed to find paths in high-dimensional continuous spaces and applies to robots with many degrees of freedom in static, as well as dynamic environments - ones where obstacles may move. 
The Ariadne's clew algorithm comprises two sub-algorithms, called Search and Explore, applied in an interleaved manner. Explore builds a representation of the accessible space while Search looks for the target. Both are posed as optimization problems. We describe a real implementation of the algorithm to plan paths for a six degrees of freedom arm in a dynamic environment where another six degrees of freedom arm is used as a moving obstacle. Experimental results show that a path is found in about one second without any pre-processing. ",The Ariadne's Clew Algorithm " This article studies the problem of modifying the action ordering of a plan in order to optimise the plan according to various criteria. One of these criteria is to make a plan less constrained and the other is to minimize its parallel execution time. Three candidate definitions are proposed for the first of these criteria, constituting a sequence of increasing optimality guarantees. Two of these are based on deordering plans, which means that ordering relations may only be removed, not added, while the third one uses reordering, where arbitrary modifications to the ordering are allowed. It is shown that only the weakest one of the three criteria is tractable to achieve, the other two being NP-hard and even difficult to approximate. Similarly, optimising the parallel execution time of a plan is studied both for deordering and reordering of plans. In the general case, both of these computations are NP-hard. However, it is shown that optimal deorderings can be computed in polynomial time for a class of planning languages based on the notions of producers, consumers and threats, which includes most of the commonly used planning languages. Computing optimal reorderings can potentially lead to even faster parallel executions, but this problem remains NP-hard and difficult to approximate even under quite severe restrictions. ",Computational Aspects of Reordering Plans " It is common to view programs as a combination of logic and control: the logic part defines what the program must do, the control part -- how to do it. The Logic Programming paradigm was developed with the intention of separating the logic from the control. Recently, extensive research has been conducted on automatic generation of control for logic programs. Only a few of these works considered the issue of automatic generation of control for improving the efficiency of logic programs. In this paper we present a novel algorithm for automatic finding of lowest-cost subgoal orderings. The algorithm works using the divide-and-conquer strategy. The given set of subgoals is partitioned into smaller sets, based on co-occurrence of free variables. The subsets are ordered recursively and merged, yielding a provably optimal order. We experimentally demonstrate the utility of the algorithm by testing it in several domains, and discuss the possibilities of its cooperation with other existing methods. ","The Divide-and-Conquer Subgoal-Ordering Algorithm for Speeding up Logic Inference" " Using an improved backtrack algorithm with sophisticated pruning techniques, we revise previous observations correlating a high frequency of hard to solve Hamiltonian Cycle instances with the Gn,m phase transition between Hamiltonicity and non-Hamiltonicity. Instead all tested graphs of 100 to 1500 vertices are easily solved. When we artificially restrict the degree sequence with a bounded maximum degree, although there is some increase in difficulty, the frequency of hard graphs is still low. 
When we consider more regular graphs based on a generalization of knight's tours, we observe frequent instances of really hard graphs, but on these the average degree is bounded by a constant. We design a set of graphs with a feature our algorithm is unable to detect and so are very hard for our algorithm, but in these we can vary the average degree from O(1) to O(n). We have so far found no class of graphs correlated with the Gn,m phase transition which asymptotically produces a high frequency of hard instances. ","The Gn,m Phase Transition is Not Hard for the Hamiltonian Cycle Problem" " This article presents a measure of semantic similarity in an IS-A taxonomy based on the notion of shared information content. Experimental evaluation against a benchmark set of human similarity judgments demonstrates that the measure performs better than the traditional edge-counting approach. The article presents algorithms that take advantage of taxonomic similarity in resolving syntactic and semantic ambiguity, along with experimental results demonstrating their effectiveness. ","Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language" " A class of interval-based temporal languages for uniformly representing and reasoning about actions and plans is presented. Actions are represented by describing what is true while the action itself is occurring, and plans are constructed by temporally relating actions and world states. The temporal languages are members of the family of Description Logics, which are characterized by high expressivity combined with good computational properties. The subsumption problem for a class of temporal Description Logics is investigated and sound and complete decision procedures are given. The basic language TL-F is considered first: it is the composition of a temporal logic TL -- able to express interval temporal networks -- together with the non-temporal logic F -- a Feature Description Logic. It is proven that subsumption in this language is an NP-complete problem. Then it is shown how to reason with the more expressive languages TLU-FU and TL-ALCF. The former adds disjunction both at the temporal and non-temporal sides of the language, the latter extends the non-temporal side with set-valued features (i.e., roles) and a propositionally complete language. ",A Temporal Description Logic for Reasoning about Actions and Plans " Many of the artificial intelligence techniques developed to date rely on heuristic search through large spaces. Unfortunately, the size of these spaces and the corresponding computational effort reduce the applicability of otherwise novel and effective algorithms. A number of parallel and distributed approaches to search have considerably improved the performance of the search process. Our goal is to develop an architecture that automatically selects parallel search strategies for optimal performance on a variety of search problems. In this paper we describe one such architecture realized in the Eureka system, which combines the benefits of many different approaches to parallel heuristic search. Through empirical and theoretical analyses we observe that features of the problem space directly affect the choice of optimal parallel search strategy. We then employ machine learning techniques to select the optimal parallel search strategy for a given problem space. 
When a new search task is input to the system, Eureka uses features describing the search space and the chosen architecture to automatically select the appropriate search strategy. Eureka has been tested on a MIMD parallel processor, a distributed network of workstations, and a single workstation using multithreading. Results generated from fifteen puzzle problems, robot arm motion problems, artificial search spaces, and planning problems indicate that Eureka outperforms any of the tested strategies used exclusively for all problem instances and is able to greatly reduce the search time for these applications. ",Adaptive Parallel Iterative Deepening Search " Order of magnitude reasoning - reasoning by rough comparisons of the sizes of quantities - is often called 'back of the envelope calculation', with the implication that the calculations are quick though approximate. This paper exhibits an interesting class of constraint sets in which order of magnitude reasoning is demonstrably fast. Specifically, we present a polynomial-time algorithm that can solve a set of constraints of the form 'Points a and b are much closer together than points c and d.' We prove that this algorithm can be applied if `much closer together' is interpreted either as referring to an infinite difference in scale or as referring to a finite difference in scale, as long as the difference in scale is greater than the number of variables in the constraint set. We also prove that the first-order theory over such constraints is decidable. ",Order of Magnitude Comparisons of Distance " This paper introduces AntNet, a novel approach to the adaptive learning of routing tables in communications networks. AntNet is a distributed, mobile agents based Monte Carlo system that was inspired by recent work on the ant colony metaphor for solving optimization problems. AntNet's agents concurrently explore the network and exchange collected information. The communication among the agents is indirect and asynchronous, mediated by the network itself. This form of communication is typical of social insects and is called stigmergy. We compare our algorithm with six state-of-the-art routing algorithms coming from the telecommunications and machine learning fields. The algorithms' performance is evaluated over a set of realistic testbeds. We run many experiments over real and artificial IP datagram networks with increasing number of nodes and under several paradigmatic spatial and temporal traffic distributions. Results are very encouraging. AntNet showed superior performance under all the experimental conditions with respect to its competitors. We analyze the main characteristics of the algorithm and try to explain the reasons for its superiority. ",AntNet: Distributed Stigmergetic Control for Communications Networks " Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability. ",A Counter Example to Theorems of Cox and Fine " As planning is applied to larger and richer domains the effort involved in constructing domain descriptions increases and becomes a significant burden on the human application designer. 
If general planners are to be applied successfully to large and complex domains it is necessary to provide the domain designer with some assistance in building correctly encoded domains. One way of doing this is to provide domain-independent techniques for extracting, from a domain description, knowledge that is implicit in that description and that can assist domain designers in debugging domain descriptions. This knowledge can also be exploited to improve the performance of planners: several researchers have explored the potential of state invariants in speeding up the performance of domain-independent planners. In this paper we describe a process by which state invariants can be extracted from the automatically inferred type structure of a domain. These techniques are being developed for exploitation by STAN, a Graphplan based planner that employs state analysis techniques to enhance its performance. ",The Automatic Inference of State Invariants in TIM " The notion of class is ubiquitous in computer science and is central in many formalisms for the representation of structured knowledge used both in knowledge representation and in databases. In this paper we study the basic issues underlying such representation formalisms and single out both their common characteristics and their distinguishing features. Such investigation leads us to propose a unifying framework in which we are able to capture the fundamental aspects of several representation languages used in different contexts. The proposed formalism is expressed in the style of description logics, which have been introduced in knowledge representation as a means to provide a semantically well-founded basis for the structural aspects of knowledge representation systems. The description logic considered in this paper is a subset of first order logic with nice computational characteristics. It is quite expressive and features a novel combination of constructs that has not been studied before. The distinguishing constructs are number restrictions, which generalize existence and functional dependencies, inverse roles, which allow one to refer to the inverse of a relationship, and possibly cyclic assertions, which are necessary for capturing real world domains. We are able to show that it is precisely such combination of constructs that makes our logic powerful enough to model the essential set of features for defining class structures that are common to frame systems, object-oriented database languages, and semantic data models. As a consequence of the established correspondences, several significant extensions of each of the above formalisms become available. The high expressiveness of the logic we propose and the need for capturing the reasoning in different contexts forces us to distinguish between unrestricted and finite model reasoning. A notable feature of our proposal is that reasoning in both cases is decidable. We argue that, by virtue of the high expressive power and of the associated reasoning capabilities on both unrestricted and finite models, our logic provides a common core for class-based representation formalisms. ",Unifying Class-Based Representation Formalisms " In default reasoning, usually not all possible ways of resolving conflicts between default rules are acceptable. Criteria expressing acceptable ways of resolving the conflicts may be hardwired in the inference mechanism, for example specificity in inheritance reasoning can be handled this way, or they may be given abstractly as an ordering on the default rules. 
In this article we investigate formalizations of the latter approach in Reiter's default logic. Our goal is to analyze and compare the computational properties of three such formalizations in terms of their computational complexity: the prioritized default logics of Baader and Hollunder, and Brewka, and a prioritized default logic that is based on lexicographic comparison. The analysis locates the propositional variants of these logics on the second and third levels of the polynomial hierarchy, and identifies the boundary between tractable and intractable inference for restricted classes of prioritized default theories. ",Complexity of Prioritized Default Logics " We describe a general approach to optimization which we term `Squeaky Wheel' Optimization (SWO). In SWO, a greedy algorithm is used to construct a solution which is then analyzed to find the trouble spots, i.e., those elements, that, if improved, are likely to improve the objective function score. The results of the analysis are used to generate new priorities that determine the order in which the greedy algorithm constructs the next solution. This Construct/Analyze/Prioritize cycle continues until some limit is reached, or an acceptable solution is found. SWO can be viewed as operating on two search spaces: solutions and prioritizations. Successive solutions are only indirectly related, via the re-prioritization that results from analyzing the prior solution. Similarly, successive prioritizations are generated by constructing and analyzing solutions. This `coupled search' has some interesting properties, which we discuss. We report encouraging experimental results on two domains, scheduling problems that arise in fiber-optic cable manufacturing, and graph coloring problems. The fact that these domains are very different supports our claim that SWO is a general technique for optimization. ",Squeaky Wheel Optimization " Intractable distributions present a common difficulty in inference within the probabilistic knowledge representation framework and variational methods have recently been popular in providing an approximate solution. In this article, we describe a perturbational approach in the form of a cumulant expansion which, to lowest order, recovers the standard Kullback-Leibler variational bound. Higher-order terms describe corrections on the variational approach without incurring much further computational cost. The relationship to other perturbational approaches such as TAP is also elucidated. We demonstrate the method on a particular class of undirected graphical models, Boltzmann machines, for which our simulation results confirm improved accuracy and enhanced stability during learning. ",Variational Cumulant Expansions for Intractable Distributions " STAN is a Graphplan-based planner, so-called because it uses a variety of STate ANalysis techniques to enhance its performance. STAN competed in the AIPS-98 planning competition where it compared well with the other competitors in terms of speed, finding solutions fastest to many of the problems posed. Although the domain analysis techniques STAN exploits are an important factor in its overall performance, we believe that the speed at which STAN solved the competition problems is largely due to the implementation of its plan graph. The implementation is based on two insights: that many of the graph construction operations can be implemented as bit-level logical operations on bit vectors, and that the graph should not be explicitly constructed beyond the fix point. 
This paper describes the implementation of STAN's plan graph and provides experimental results which demonstrate the circumstances under which advantages can be obtained from using this implementation. ",Efficient Implementation of the Plan Graph in STAN " Top-down and bottom-up theorem proving approaches each have specific advantages and disadvantages. Bottom-up provers profit from strong redundancy control but suffer from the lack of goal-orientation, whereas top-down provers are goal-oriented but often have weak calculi when their proof lengths are considered. In order to integrate both approaches, we try to achieve cooperation between a top-down and a bottom-up prover in two different ways: The first technique aims at supporting a bottom-up with a top-down prover. A top-down prover generates subgoal clauses, they are then processed by a bottom-up prover. The second technique deals with the use of bottom-up generated lemmas in a top-down prover. We apply our concept to the areas of model elimination and superposition. We discuss the ability of our techniques to shorten proofs as well as to reorder the search space in an appropriate manner. Furthermore, in order to identify subgoal clauses and lemmas which are actually relevant for the proof task, we develop methods for a relevancy-based filtering. Experiments with the provers SETHEO and SPASS performed in the problem library TPTP reveal the high potential of our cooperation approaches. ",Cooperation between Top-Down and Bottom-Up Theorem Provers " A previously developed quantum search algorithm for solving 1-SAT problems in a single step is generalized to apply to a range of highly constrained k-SAT problems. We identify a bound on the number of clauses in satisfiability problems for which the generalized algorithm can find a solution in a constant number of steps as the number of variables increases. This performance contrasts with the linear growth in the number of steps required by the best classical algorithms, and the exponential number required by classical and quantum methods that ignore the problem structure. In some cases, the algorithm can also guarantee that insoluble problems in fact have no solutions, unlike previously proposed quantum search algorithms. ",Solving Highly Constrained Search Problems with Quantum Computers " Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory. This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to describe performance criteria, in the functions used to describe state transitions and observations, and in the relationships among features used to describe states, actions, rewards, and observations. 
Specialized representations, and algorithms employing these representations, can achieve computational leverage by exploiting these various forms of structure. Certain AI techniques -- in particular those based on the use of structured, intensional representations -- can be viewed in this way. This paper surveys several types of representations for both classical and decision-theoretic planning problems, and planning algorithms that exploit these representations in a number of different ways to ease the computational burden of constructing policies or plans. It focuses primarily on abstraction, aggregation and decomposition techniques based on AI-style representations. ","Decision-Theoretic Planning: Structural Assumptions and Computational Leverage" " We study the problem of probabilistic deduction with conditional constraints over basic events. We show that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. We then concentrate on the special case of probabilistic deduction in conditional constraint trees. We elaborate very efficient techniques for globally complete probabilistic deduction. In detail, for conditional constraint trees with point probabilities, we present a local approach to globally complete probabilistic deduction, which runs in linear time in the size of the conditional constraint trees. For conditional constraint trees with interval probabilities, we show that globally complete probabilistic deduction can be done in a global approach by solving nonlinear programs. We show how these nonlinear programs can be transformed into equivalent linear programs, which are solvable in polynomial time in the size of the conditional constraint trees. ",Probabilistic Deduction with Conditional Constraints over Basic Events " We describe a variational approximation method for efficient inference in large-scale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference methods based on stochastic sampling or search. We describe a variational approach to the problem of diagnostic inference in the `Quick Medical Reference' (QMR) network. The QMR network is a large-scale probabilistic graphical model built on statistical and expert knowledge. Exact probabilistic inference is infeasible in this model for all but a small set of cases. We evaluate our variational inference algorithm on a large set of diagnostic test cases, comparing the algorithm to a state-of-the-art stochastic sampling method. ",Variational Probabilistic Inference and the QMR-DT Network " This paper offers an approach to extensible knowledge representation and reasoning for a family of formalisms known as Description Logics. The approach is based on the notion of adding new concept constructors, and includes a heuristic methodology for specifying the desired extensions, as well as a modularized software architecture that supports implementing extensions. The architecture detailed here falls in the normalize-compare paradigm, and supports both intensional reasoning (subsumption) involving concepts, and extensional reasoning involving individuals after incremental updates to the knowledge base. The resulting approach can be used to extend the reasoner with specialized notions that are motivated by specific problems or application areas, such as reasoning about dates, plans, etc. 
In addition, it provides an opportunity to implement constructors that are not currently yet sufficiently well understood theoretically, but are needed in practice. Also, for constructors that are provably hard to reason with (e.g., ones whose presence would lead to undecidability), it allows the implementation of incomplete reasoners where the incompleteness is tailored to be acceptable for the application at hand. ",Extensible Knowledge Representation: the Case of Description Reasoners " The research on conditional planning rejects the assumptions that there is no uncertainty or incompleteness of knowledge with respect to the state and changes of the system the plans operate on. Without these assumptions the sequences of operations that achieve the goals depend on the initial state and the outcomes of nondeterministic changes in the system. This setting raises the questions of how to represent the plans and how to perform plan search. The answers are quite different from those in the simpler classical framework. In this paper, we approach conditional planning from a new viewpoint that is motivated by the use of satisfiability algorithms in classical planning. Translating conditional planning to formulae in the propositional logic is not feasible because of inherent computational limitations. Instead, we translate conditional planning to quantified Boolean formulae. We discuss three formalizations of conditional planning as quantified Boolean formulae, and present experimental results obtained with a theorem-prover. ",Constructing Conditional Plans by a Theorem-Prover " Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In this paper we address two crucial issues which have been considered to be a `black art' in classification tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input. We find that best results are obtained when the higher-level model combines the confidence (and not just the predictions) of the lower-level ones. We demonstrate the effectiveness of stacked generalization for combining three different types of learning algorithms for classification tasks. We also compare the performance of stacked generalization with majority vote and published results of arcing and bagging. ",Issues in Stacked Generalization " We present PARIS, an approach for the automatic alignment of ontologies. PARIS aligns not only instances, but also relations and classes. Alignments at the instance-level cross-fertilize with alignments at the schema-level. Thereby, our system provides a truly holistic solution to the problem of ontology alignment. The heart of the approach is probabilistic. This allows PARIS to run without any parameter tuning. We demonstrate the efficiency of the algorithm and its precision through extensive experiments. In particular, we obtain a precision of around 90% in experiments with two of the world's largest ontologies. ",Ontology Alignment at the Instance and Schema Level " We prove that it is NP-hard for a coalition of two manipulators to compute how to manipulate the Borda voting rule. This resolves one of the last open problems in the computational complexity of manipulating common voting rules. 
Because of this NP-hardness, we treat computing a manipulation as an approximation problem where we try to minimize the number of manipulators. Based on ideas from bin packing and multiprocessor scheduling, we propose two new approximation methods to compute manipulations of the Borda rule. Experiments show that these methods significantly outperform the previous best known approximation method. We are able to find optimal manipulations in almost all the randomly generated elections tested. Our results suggest that, whilst computing a manipulation of the Borda rule by a coalition is NP-hard, computational complexity may provide only a weak barrier against manipulation in practice. ",Complexity of and Algorithms for Borda Manipulation " We introduce a temporal model for reasoning on disjunctive metric constraints on intervals and time points in temporal contexts. This temporal model is composed of a labeled temporal algebra and its reasoning algorithms. The labeled temporal algebra defines labeled disjunctive metric point-based constraints, where each disjunct in each input disjunctive constraint is univocally associated with a label. Reasoning algorithms manage labeled constraints, associated label lists, and sets of mutually inconsistent disjuncts. These algorithms guarantee consistency and obtain a minimal network. Additionally, constraints can be organized in a hierarchy of alternative temporal contexts. Therefore, we can reason on context-dependent disjunctive metric constraints on intervals and points. Moreover, the model is able to represent non-binary constraints, such that logical dependencies on disjuncts in constraints can be handled. The computational cost of reasoning algorithms is exponential in accordance with the underlying problem complexity, although some improvements are proposed. ","Reasoning on Interval and Point-based Disjunctive Metric Constraints in Temporal Contexts" " Negation as failure and incomplete information in logic programs have been studied by many researchers. In order to explain HOW a negated conclusion was reached, we introduce and prove a different way of negating facts to overcome misleads in logic programs. Negating facts can be achieved by asking the user for constants that do not appear elsewhere in the knowledge base. ",Overcoming Misleads In Logic Programs by Redefining Negation " Despite the prevalence of the Computational Theory of Mind and the Connectionist Model, establishing the key principles of Cognitive Science is still controversial and inconclusive. This paper proposes the concept of Pattern Recognition as a Necessary and Sufficient Principle for general cognitive science modeling, in a very ambitious scientific proposal. A formal physical definition of the pattern recognition concept is also proposed to close many key conceptual gaps in the field. ","Proposal of Pattern Recognition as a necessary and sufficient Principle to Cognitive Science" " As was shown recently, many important AI problems require counting the number of models of propositional formulas. The problem of counting models of such formulas is, according to present knowledge, computationally intractable in the worst case. Based on the Davis-Putnam procedure, we present an algorithm, CDP, that computes the exact number of models of a propositional CNF or DNF formula F. 
Let m and n be the number of clauses and variables of F, respectively, and let p denote the probability that a literal l of F occurs in a clause C of F, then the average running time of CDP is shown to be O(nm^d), where d=-1/log(1-p). The practical performance of CDP has been estimated in a series of experiments on a wide variety of CNF formulas. ",The Good Old Davis-Putnam Procedure Helps Counting Models " This paper presents a new approach to identifying and eliminating mislabeled training instances for supervised learning. The goal of this approach is to improve classification accuracies produced by learning algorithms by improving the quality of the training data. Our approach uses a set of learning algorithms to create classifiers that serve as noise filters for the training data. We evaluate single algorithm, majority vote and consensus filters on five datasets that are prone to labeling errors. Our experiments illustrate that filtering significantly improves classification accuracy for noise levels up to 30 percent. An analytical and empirical evaluation of the precision of our approach shows that consensus filters are conservative at throwing away good data at the expense of retaining bad data and that majority filters are better at detecting bad data at the expense of throwing away good data. This suggests that for situations in which there is a paucity of data, consensus filters are preferable, whereas majority vote filters are preferable for situations with an abundance of data. ",Identifying Mislabeled Training Data " In many real-world learning tasks, it is expensive to acquire a sufficient number of labeled examples for training. This paper investigates methods for reducing annotation cost by `sample selection'. In this approach, during training the learning program examines many unlabeled examples and selects for labeling only those that are most informative at each stage. This avoids redundantly labeling examples that contribute little new information. Our work follows on previous research on Query By Committee, extending the committee-based paradigm to the context of probabilistic classification. We describe a family of empirical methods for committee-based sample selection in probabilistic classification models, which evaluate the informativeness of an example by measuring the degree of disagreement between several model variants. These variants (the committee) are drawn randomly from a probability distribution conditioned by the training set labeled so far. The method was applied to the real-world natural language processing task of stochastic part-of-speech tagging. We find that all variants of the method achieve a significant reduction in annotation cost, although their computational efficiency differs. In particular, the simplest variant, a two member committee with no parameters to tune, gives excellent results. We also show that sample selection yields a significant reduction in the size of the model used by the tagger. ",Committee-Based Sample Selection for Probabilistic Classifiers " We investigate the problem of reasoning in the propositional fragment of MBNF, the logic of minimal belief and negation as failure introduced by Lifschitz, which can be considered as a unifying framework for several nonmonotonic formalisms, including default logic, autoepistemic logic, circumscription, epistemic queries, and logic programming. We characterize the complexity and provide algorithms for reasoning in propositional MBNF. 
In particular, we show that entailment in propositional MBNF lies at the third level of the polynomial hierarchy, hence it is harder than reasoning in all the above-mentioned propositional formalisms for nonmonotonic reasoning. We also prove the exact correspondence between negation as failure in MBNF and negative introspection in Moore's autoepistemic logic. ",Reasoning about Minimal Belief and Negation as Failure " We show how to find a minimum weight loop cutset in a Bayesian network with high probability. Finding such a loop cutset is the first step in the method of conditioning for inference. Our randomized algorithm for finding a loop cutset outputs a minimum loop cutset after O(c 6^k kn) steps with probability at least 1 - (1 - 1/(6^k))^(c 6^k), where c > 1 is a constant specified by the user, k is the minimal size of a minimum weight loop cutset, and n is the number of vertices. We also show empirically that a variant of this algorithm often finds a loop cutset that is closer to the minimum weight loop cutset than the ones found by the best deterministic algorithms known. ",Randomized Algorithms for the Loop Cutset Problem " Recently model checking representation and search techniques were shown to be efficiently applicable to planning, in particular to non-deterministic planning. Such planning approaches use Ordered Binary Decision Diagrams (OBDDs) to encode a planning domain as a non-deterministic finite automaton and then apply fast algorithms from model checking to search for a solution. OBDDs can effectively scale and can provide universal plans for complex planning domains. We are particularly interested in addressing the complexities arising in non-deterministic, multi-agent domains. In this article, we present UMOP, a new universal OBDD-based planning framework for non-deterministic, multi-agent domains. We introduce a new planning domain description language, NADL, to specify non-deterministic, multi-agent domains. The language contributes the explicit definition of controllable agents and uncontrollable environment agents. We describe the syntax and semantics of NADL and show how to build an efficient OBDD-based representation of an NADL description. The UMOP planning system uses NADL and different OBDD-based universal planning algorithms. It includes the previously developed strong and strong cyclic planning algorithms. In addition, we introduce our new optimistic planning algorithm that relaxes optimality guarantees and generates plausible universal plans in some domains where neither a strong nor a strong cyclic solution exists. We present empirical results applying UMOP to domains ranging from deterministic and single-agent with no environment actions to non-deterministic and multi-agent with complex environment actions. UMOP is shown to be a rich and efficient planning system. ","OBDD-based Universal Planning for Synchronized Agents in Non-Deterministic Domains" " This paper reviews the connections between Graphplan's planning-graph and the dynamic constraint satisfaction problem and motivates the need for adapting CSP search techniques to the Graphplan algorithm. It then describes how explanation based learning, dependency directed backtracking, dynamic variable ordering, forward checking, sticky values and random-restart search strategies can be adapted to Graphplan. Empirical results are provided to demonstrate that these augmentations improve Graphplan's performance significantly (up to 1000x speedups) on several benchmark problems. 
Special attention is paid to the explanation-based learning and dependency directed backtracking techniques as they are empirically found to be most useful in improving the performance of Graphplan. ","Planning Graph as a (Dynamic) CSP: Exploiting EBL, DDB and other CSP Search Techniques in Graphplan" " We investigate the space efficiency of a Propositional Knowledge Representation (PKR) formalism. Intuitively, the space efficiency of a formalism F in representing a certain piece of knowledge A, is the size of the shortest formula of F that represents A. In this paper we assume that knowledge is either a set of propositional interpretations (models) or a set of propositional formulae (theorems). We provide a formal way of talking about the relative ability of PKR formalisms to compactly represent a set of models or a set of theorems. We introduce two new compactness measures, the corresponding classes, and show that the relative space efficiency of a PKR formalism in representing models/theorems is directly related to such classes. In particular, we consider formalisms for nonmonotonic reasoning, such as circumscription and default logic, as well as belief revision operators and the stable model semantics for logic programs with negation. One interesting result is that formalisms with the same time complexity do not necessarily belong to the same space efficiency class. ",Space Efficiency of Propositional Knowledge Representation Formalisms " Partially observable Markov decision processes (POMDPs) provide an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in which states of the system are observable only indirectly, via a set of imperfect or noisy observations. The modeling advantage of POMDPs, however, comes at a price -- exact methods for solving them are computationally very expensive and thus applicable in practice only to very simple problems. We focus on efficient approximation (heuristic) methods that attempt to alleviate the computational problem and trade off accuracy for speed. We have two objectives here. First, we survey various approximation methods, analyze their properties and relations and provide some new insights into their differences. Second, we present a number of new approximation methods and novel refinements of existing techniques. The theoretical results are supported by experiments on a problem from the agent navigation domain. ","Value-Function Approximations for Partially Observable Markov Decision Processes" " Pearl and Dechter (1996) claimed that the d-separation criterion for conditional independence in acyclic causal networks also applies to networks of discrete variables that have feedback cycles, provided that the variables of the system are uniquely determined by the random disturbances. I show by example that this is not true in general. Some condition stronger than uniqueness is needed, such as the existence of a causal dynamics guaranteed to lead to the unique solution. ","On Deducing Conditional Independence from d-Separation in Causal Graphs with Feedback (Research Note)" " Functional relationships between objects, called `attributes', are of considerable importance in knowledge representation languages, including Description Logics (DLs). A study of the literature indicates that papers have made, often implicitly, different assumptions about the nature of attributes: whether they are always required to have a value, or whether they can be partial functions. 
The work presented here is the first explicit study of this difference for subclasses of the CLASSIC DL, involving the same-as concept constructor. It is shown that although determining subsumption between concept descriptions has the same complexity (though requiring different algorithms), the story is different in the case of determining the least common subsumer (lcs). For attributes interpreted as partial functions, the lcs exists and can be computed relatively easily; even in this case our results correct and extend three previous papers about the lcs of DLs. In the case where attributes must have a value, the lcs may not exist, and even if it exists it may be of exponential size. Interestingly, it is possible to decide in polynomial time if the lcs exists. ",What's in an Attribute? Consequences for the Least Common Subsumer " We study the complexity of the combination of the Description Logics ALCQ and ALCQI with a terminological formalism based on cardinality restrictions on concepts. These combinations can naturally be embedded into C^2, the two variable fragment of predicate logic with counting quantifiers, which yields decidability in NExpTime. We show that this approach leads to an optimal solution for ALCQI, as ALCQI with cardinality restrictions has the same complexity as C^2 (NExpTime-complete). In contrast, we show that for ALCQ, the problem can be solved in ExpTime. This result is obtained by a reduction of reasoning with cardinality restrictions to reasoning with the (in general weaker) terminological formalism of general axioms for ALCQ extended with nominals. Using the same reduction, we show that, for the extension of ALCQI with nominals, reasoning with general axioms is a NExpTime-complete problem. Finally, we sharpen this result and show that pure concept satisfiability for ALCQI with nominals is NExpTime-complete. Without nominals, this problem is known to be PSpace-complete. ","The Complexity of Reasoning with Cardinality Restrictions and Nominals in Expressive Description Logics" " The local search algorithm WSat is one of the most successful algorithms for solving the satisfiability (SAT) problem. It is notably effective at solving hard Random 3-SAT instances near the so-called `satisfiability threshold', but still shows a peak in search cost near the threshold and large variations in cost over different instances. We make a number of significant contributions to the analysis of WSat on high-cost random instances, using the recently-introduced concept of the backbone of a SAT instance. The backbone is the set of literals which are entailed by an instance. We find that the number of solutions predicts the cost well for small-backbone instances but is much less relevant for the large-backbone instances which appear near the threshold and dominate in the overconstrained region. We show a very strong correlation between search cost and the Hamming distance to the nearest solution early in WSat's search. This pattern leads us to introduce a measure of the backbone fragility of an instance, which indicates how persistent the backbone is as clauses are removed. We propose that high-cost random instances for local search are those with very large backbones which are also backbone-fragile. We suggest that the decay in cost beyond the satisfiability threshold is due to increasing backbone robustness (the opposite of backbone fragility). Our hypothesis makes three correct predictions. 
First, that the backbone robustness of an instance is negatively correlated with the local search cost when other factors are controlled for. Second, that backbone-minimal instances (which are 3-SAT instances altered so as to be more backbone-fragile) are unusually hard for WSat. Third, that the clauses most often unsatisfied during search are those whose deletion has the most effect on the backbone. In understanding the pathologies of local search methods, we hope to contribute to the development of new and better techniques. ",Backbone Fragility and the Local Search Cost Peak " This paper describes a novel method by which a spoken dialogue system can learn to choose an optimal dialogue strategy from its experience interacting with human users. The method is based on a combination of reinforcement learning and performance modeling of spoken dialogue systems. The reinforcement learning component applies Q-learning (Watkins, 1989), while the performance modeling component applies the PARADISE evaluation framework (Walker et al., 1997) to learn the performance function (reward) used in reinforcement learning. We illustrate the method with a spoken dialogue system named ELVIS (EmaiL Voice Interactive System), that supports access to email over the phone. We conduct a set of experiments for training an optimal dialogue strategy on a corpus of 219 dialogues in which human users interact with ELVIS over the phone. We then test that strategy on a corpus of 18 dialogues. We show that ELVIS can learn to optimize its strategy selection for agent initiative, for reading messages, and for summarizing email folders. ","An Application of Reinforcement Learning to Dialogue Strategy Selection in a Spoken Dialogue System for Email" " We show that for several variations of partially observable Markov decision processes, polynomial-time algorithms for finding control policies are unlikely to or simply don't have guarantees of finding policies within a constant factor or a constant summand of optimal. Here ""unlikely"" means ""unless some complexity classes collapse,"" where the collapses considered are P=NP, P=PSPACE, or P=EXP. Until or unless these collapses are shown to hold, any control-policy designer must choose between such performance guarantees and efficient computation. ","Nonapproximability Results for Partially Observable Markov Decision Processes" " The paper addresses the problem of computing goal orderings, which is one of the longstanding issues in AI planning. It makes two new contributions. First, it formally defines and discusses two different goal orderings, which are called the reasonable and the forced ordering. Both orderings are defined for simple STRIPS operators as well as for more complex ADL operators supporting negation and conditional effects. The complexity of these orderings is investigated and their practical relevance is discussed. Secondly, two different methods to compute reasonable goal orderings are developed. One of them is based on planning graphs, while the other investigates the set of actions directly. Finally, it is shown how the ordering relations, which have been derived for a given set of goals G, can be used to compute a so-called goal agenda that divides G into an ordered set of subgoals. Any planner can then, in principle, use the goal agenda to plan for increasing sets of subgoals. This can lead to an exponential complexity reduction, as the solution to a complex planning problem is found by solving easier subproblems. 
Since only a polynomial overhead is caused by the goal agenda computation, a potential exists to dramatically speed up planning algorithms as we demonstrate in the empirical evaluation, where we use this method in the IPP planner. ","On Reasonable and Forced Goal Orderings and their Use in an Agenda-Driven Planning Algorithm" " The goal of this research is to develop agents that are adaptive and predictable and timely. At first blush, these three requirements seem contradictory. For example, adaptation risks introducing undesirable side effects, thereby making agents' behavior less predictable. Furthermore, although formal verification can assist in ensuring behavioral predictability, it is known to be time-consuming. Our solution to the challenge of satisfying all three requirements is the following. Agents have finite-state automaton plans, which are adapted online via evolutionary learning (perturbation) operators. To ensure that critical behavioral constraints are always satisfied, agents' plans are first formally verified. They are then reverified after every adaptation. If reverification concludes that constraints are violated, the plans are repaired. The main objective of this paper is to improve the efficiency of reverification after learning, so that agents have a sufficiently rapid response time. We present two solutions: positive results that certain learning operators are a priori guaranteed to preserve useful classes of behavioral assurance constraints (which implies that no reverification is needed for these operators), and efficient incremental reverification algorithms for those learning operators that have negative a priori results. ",Asimovian Adaptive Agents " A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task. ",A Model of Inductive Bias Learning " The chief aim of this paper is to propose mean-field approximations for a broad class of Belief networks, of which sigmoid and noisy-or networks can be seen as special cases. The approximations are based on a powerful mean-field theory suggested by Plefka. We show that Saul, Jaakkola and Jordan' s approach is the first order approximation in Plefka's approach, via a variational derivation. The application of Plefka's theory to belief networks is not computationally tractable. To tackle this problem we propose new approximations based on Taylor series. 
Small scale experiments show that the proposed schemes are attractive. ",Mean Field Methods for a Special Class of Belief Networks " The recent approaches of extending the GRAPHPLAN algorithm to handle more expressive planning formalisms raise the question of what the formal meaning of ""expressive power"" is. We formalize the intuition that expressive power is a measure of how concisely planning domains and plans can be expressed in a particular formalism by introducing the notion of ""compilation schemes"" between planning formalisms. Using this notion, we analyze the expressiveness of a large family of propositional planning formalisms, ranging from basic STRIPS to a formalism with conditional effects, partial state specifications, and propositional formulae in the preconditions. One of the results is that conditional effects cannot be compiled away if plan size should grow only linearly but can be compiled away if we allow for polynomial growth of the resulting plans. This result confirms that the recently proposed extensions to the GRAPHPLAN algorithm concerning conditional effects are optimal with respect to the ""compilability"" framework. Another result is that general propositional formulae cannot be compiled into conditional effects if the plan size should be preserved linearly. This implies that allowing general propositional formulae in preconditions and effect conditions adds another level of difficulty in generating a plan. ","On the Compilability and Expressive Power of Propositional Planning Formalisms" " In order to generate plans for agents with multiple actuators, agent teams, or distributed controllers, we must be able to represent and plan using concurrent actions with interacting effects. This has historically been considered a challenging task requiring a temporal planner with the ability to reason explicitly about time. We show that with simple modifications, the STRIPS action representation language can be used to represent interacting actions. Moreover, algorithms for partial-order planning require only small modifications in order to be applied in such multiagent domains. We demonstrate this fact by developing a sound and complete partial-order planner for planning with concurrent interacting actions, POMP, that extends existing partial-order planners in a straightforward way. These results open the way to the use of partial-order planners for the centralized control of cooperative multiagent systems. ",Partial-Order Planning with Concurrent Interacting Actions " Domain-independent planning is a hard combinatorial problem. Taking into account plan quality makes the task even more difficult. This article introduces Planning by Rewriting (PbR), a new paradigm for efficient high-quality domain-independent planning. PbR exploits declarative plan-rewriting rules and efficient local search techniques to transform an easy-to-generate, but possibly suboptimal, initial plan into a high-quality plan. In addition to addressing the issues of planning efficiency and plan quality, this framework offers a new anytime planning algorithm. We have implemented this planner and applied it to several existing domains. The experimental results show that the PbR approach provides significant savings in planning effort while generating high-quality plans. ",Planning by Rewriting " Partially observable Markov decision processes (POMDPs) have recently become popular among many AI researchers because they serve as a natural model for planning under uncertainty. 
Value iteration is a well-known algorithm for finding optimal policies for POMDPs. It typically takes a large number of iterations to converge. This paper proposes a method for accelerating the convergence of value iteration. The method has been evaluated on an array of benchmark problems and was found to be very effective: It enabled value iteration to converge after only a few iterations on all the test problems. ","Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes" " We tackle the problem of planning in nondeterministic domains, by presenting a new approach to conformant planning. Conformant planning is the problem of finding a sequence of actions that is guaranteed to achieve the goal despite the nondeterminism of the domain. Our approach is based on the representation of the planning domain as a finite state automaton. We use Symbolic Model Checking techniques, in particular Binary Decision Diagrams, to compactly represent and efficiently search the automaton. In this paper we make the following contributions. First, we present a general planning algorithm for conformant planning, which applies to fully nondeterministic domains, with uncertainty in the initial condition and in action effects. The algorithm is based on a breadth-first, backward search, and returns conformant plans of minimal length, if a solution to the planning problem exists, otherwise it terminates concluding that the problem admits no conformant solution. Second, we provide a symbolic representation of the search space based on Binary Decision Diagrams (BDDs), which is the basis for search techniques derived from symbolic model checking. The symbolic representation makes it possible to analyze potentially large sets of states and transitions in a single computation step, thus providing for an efficient implementation. Third, we present CMBP (Conformant Model Based Planner), an efficient implementation of the data structures and algorithm described above, directly based on BDD manipulations, which allows for a compact representation of the search layers and an efficient implementation of the search steps. Finally, we present an experimental comparison of our approach with the state-of-the-art conformant planners CGP, QBFPLAN and GPT. Our analysis includes all the planning problems from the distribution packages of these systems, plus other problems defined to stress a number of specific factors. Our approach appears to be the most effective: CMBP is strictly more expressive than QBFPLAN and CGP and, in all the problems where a comparison is possible, CMBP outperforms its competitors, sometimes by orders of magnitude. ",Conformant Planning via Symbolic Model Checking " Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. 
Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state of the art general purpose sampling algorithms, likelihood weighting (Fung and Chang, 1989; Shachter and Peot, 1989) and self-importance sampling (Shachter and Peot, 1989). We used in our tests three large real Bayesian network models available to the scientific community: the CPCS network (Pradhan et al., 1994), the PathFinder network (Heckerman, Horvitz, and Nathwani, 1990), and the ANDES network (Conati, Gertner, VanLehn, and Druzdzel, 1997), with evidence as unlikely as 10^-41. While the AIS-BN algorithm always performed better than the other two algorithms, in the majority of the test cases it achieved orders of magnitude improvement in precision of the results. Improvement in speed given a desired precision is even more dramatic, although we are unable to report numerical results here, as the other algorithms almost never achieved the precision reached even by the first few iterations of the AIS-BN algorithm. ","AIS-BN: An Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks" " In recent years, many improvements to backtracking algorithms for solving constraint satisfaction problems have been proposed. The techniques for improving backtracking algorithms can be conveniently classified as look-ahead schemes and look-back schemes. Unfortunately, look-ahead and look-back schemes are not entirely orthogonal as it has been observed empirically that the enhancement of look-ahead techniques is sometimes counterproductive to the effects of look-back techniques. In this paper, we focus on the relationship between the two most important look-ahead techniques---using a variable ordering heuristic and maintaining a level of local consistency during the backtracking search---and the look-back technique of conflict-directed backjumping (CBJ). We show that there exists a ""perfect"" dynamic variable ordering such that CBJ becomes redundant. We also show theoretically that as the level of local consistency that is maintained in the backtracking search is increased, the less that backjumping will be an improvement. Our theoretical results partially explain why a backtracking algorithm doing more in the look-ahead phase cannot benefit more from the backjumping look-back scheme. Finally, we show empirically that adding CBJ to a backtracking algorithm that maintains generalized arc consistency (GAC), an algorithm that we refer to as GAC-CBJ, can still provide orders of magnitude speedups. Our empirical results contrast with Bessiere and Regin's conclusion (1996) that CBJ is useless to an algorithm that maintains arc consistency. ",Conflict-Directed Backjumping Revisited " This paper presents an implemented system for recognizing the occurrence of events described by simple spatial-motion verbs in short image sequences. The semantics of these verbs is specified with event-logic expressions that describe changes in the state of force-dynamic relations between the participants of the event. 
An efficient finite representation is introduced for the infinite sets of intervals that occur when describing liquid and semi-liquid events. Additionally, an efficient procedure using this representation is presented for inferring occurrences of compound events, described with event-logic expressions, from occurrences of primitive events. Using force dynamics and event logic to specify the lexical semantics of events allows the system to be more robust than prior systems based on motion profile. ","Grounding the Lexical Semantics of Verbs in Visual Perception using Force Dynamics and Event Logic" " An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund and Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithm. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier -- especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing their performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes in the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees. ",Popular Ensemble Methods: An Empirical Study " This paper presents an evolutionary algorithm with a new goal-sequence domination scheme for better decision support in multi-objective optimization. The approach allows the inclusion of advanced hard/soft priority and constraint information on each objective component, and is capable of incorporating multiple specifications with overlapping or non-overlapping objective functions via logical 'OR' and 'AND' connectives to drive the search towards multiple regions of trade-off. In addition, we propose a dynamic sharing scheme that is simple and adaptively estimated according to the on-line population distribution without needing any a priori parameter setting. Each feature in the proposed algorithm is examined to show its respective contribution, and the performance of the algorithm is compared with other evolutionary optimization methods. It is shown that the proposed algorithm has performed well in the diversity of evolutionary search and uniform distribution of non-dominated individuals along the final trade-offs, without significant computational effort. The algorithm is also applied to the design optimization of a practical servo control system for hard disk drives with a single voice-coil-motor actuator. Results of the evolutionary designed servo control system show a superior closed-loop performance compared to classical PID or RPT approaches. 
","An Evolutionary Algorithm with Advanced Goal and Priority Specification for Multi-objective Optimization" " This paper presents GRT, a domain-independent heuristic planning system for STRIPS worlds. GRT solves problems in two phases. In the pre-processing phase, it estimates the distance between each fact and the goals of the problem, in a backward direction. Then, in the search phase, these estimates are used in order to further estimate the distance between each intermediate state and the goals, guiding so the search process in a forward direction and on a best-first basis. The paper presents the benefits from the adoption of opposite directions between the preprocessing and the search phases, discusses some difficulties that arise in the pre-processing phase and introduces techniques to cope with them. Moreover, it presents several methods of improving the efficiency of the heuristic, by enriching the representation and by reducing the size of the problem. Finally, a method of overcoming local optimal states, based on domain axioms, is proposed. According to it, difficult problems are decomposed into easier sub-problems that have to be solved sequentially. The performance results from various domains, including those of the recent planning competitions, show that GRT is among the fastest planners. ","The GRT Planning System: Backward Heuristic Construction in Forward State-Space Planning" " In the recent literature of Artificial Intelligence, an intensive research effort has been spent, for various algebras of qualitative relations used in the representation of temporal and spatial knowledge, on the problem of classifying the computational complexity of reasoning problems for subsets of algebras. The main purpose of these researches is to describe a restricted set of maximal tractable subalgebras, ideally in an exhaustive fashion with respect to the hosting algebras. In this paper we introduce a novel algebra for reasoning about Spatial Congruence, show that the satisfiability problem in the spatial algebra MC-4 is NP-complete, and present a complete classification of tractability in the algebra, based on the individuation of three maximal tractable subclasses, one containing the basic relations. The three algebras are formed by 14, 10 and 9 relations out of 16 which form the full algebra. ",The Complexity of Reasoning about Spatial Congruence " Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a {\em biased} estimate of the gradient of the {\em average reward} in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter $\beta\in [0,1)$ (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter $\beta$ is related to the {\em mixing time} of the controlled POMDP. 
We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward ",Infinite-Horizon Policy-Gradient Estimation " Description Logics (DLs) are suitable, well-known, logics for managing structured knowledge. They allow reasoning about individuals and well defined concepts, i.e., set of individuals with common properties. The experience in using DLs in applications has shown that in many cases we would like to extend their capabilities. In particular, their use in the context of Multimedia Information Retrieval (MIR) leads to the convincement that such DLs should allow the treatment of the inherent imprecision in multimedia object content representation and retrieval. In this paper we will present a fuzzy extension of ALC, combining Zadeh's fuzzy logic with a classical DL. In particular, concepts becomes fuzzy and, thus, reasoning about imprecise concepts is supported. We will define its syntax, its semantics, describe its properties and present a constraint propagation calculus for reasoning in it. ",Reasoning within Fuzzy Description Logics " Top-down induction of decision trees has been observed to suffer from the inadequate functioning of the pruning phase. In particular, it is known that the size of the resulting tree grows linearly with the sample size, even though the accuracy of the tree does not improve. Reduced Error Pruning is an algorithm that has been used as a representative technique in attempts to explain the problems of decision tree learning. In this paper we present analyses of Reduced Error Pruning in three different settings. First we study the basic algorithmic properties of the method, properties that hold independent of the input decision tree and pruning examples. Then we examine a situation that intuitively should lead to the subtree under consideration to be replaced by a leaf node, one in which the class label and attribute values of the pruning examples are independent of each other. This analysis is conducted under two different assumptions. The general analysis shows that the pruning probability of a node fitting pure noise is bounded by a function that decreases exponentially as the size of the tree grows. In a specific analysis we assume that the examples are distributed uniformly to the tree. This assumption lets us approximate the number of subtrees that are pruned because they do not receive any pruning examples. This paper clarifies the different variants of the Reduced Error Pruning algorithm, brings new insight to its algorithmic properties, analyses the algorithm with less imposed assumptions than before, and includes the previously overlooked empty subtrees to the analysis. ",An Analysis of Reduced Error Pruning " This paper investigates the problems arising in the construction of a program to play the game of contract bridge. These problems include both the difficulty of solving the game's perfect information variant, and techniques needed to address the fact that bridge is not, in fact, a perfect information game. 
GIB, the program being described, involves five separate technical advances: partition search, the practical application of Monte Carlo techniques to realistic problems, a focus on achievable sets to solve problems inherent in the Monte Carlo approach, an extension of alpha-beta pruning from total orders to arbitrary distributive lattices, and the use of squeaky wheel optimization to find approximately optimal solutions to cardplay problems. GIB is currently believed to be of approximately expert caliber, and is currently the strongest computer bridge program in the world. ",GIB: Imperfect Information in a Computationally Challenging Game " Enforcing local consistencies is one of the main features of constraint reasoning. Which level of local consistency should be used when searching for solutions in a constraint network is a basic question. Arc consistency and partial forms of arc consistency have been widely studied, and have been known for sometime through the forward checking or the MAC search algorithms. Until recently, stronger forms of local consistency remained limited to those that change the structure of the constraint graph, and thus, could not be used in practice, especially on large networks. This paper focuses on the local consistencies that are stronger than arc consistency, without changing the structure of the network, i.e., only removing inconsistent values from the domains. In the last five years, several such local consistencies have been proposed by us or by others. We make an overview of all of them, and highlight some relations between them. We compare them both theoretically and experimentally, considering their pruning efficiency and the time required to enforce them. ",Domain Filtering Consistencies " In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem on-line plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of stochastic processes, provide its dynamic Bayesian network (DBN) structure and analyse the properties of this network. We then describe an application of the Rao-Blackwellised Particle Filter to the AHMM which allows us to construct an efficient, hybrid inference method for this model. In terms of plan recognition, we propose a novel plan recognition framework based on the AHMM as the plan execution model. The Rao-Blackwellised hybrid inference for AHMM can take advantage of the independence properties inherent in a model of plan execution, leading to an algorithm for online probabilistic plan recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms. We demonstrate the usefulness of the AHMM framework via a behaviour recognition system in a complex spatial environment using distributed video surveillance data. ",Policy Recognition in the Abstract Hidden Markov Model " We describe and evaluate the algorithmic techniques that are used in the FF planning system. 
Like the HSP system, FF relies on forward state space search, using a heuristic that estimates goal distances by ignoring delete lists. Unlike HSP's heuristic, our method does not assume facts to be independent. We introduce a novel search strategy that combines hill-climbing with systematic search, and we show how other powerful heuristic information can be extracted and used to prune the search space. FF was the most successful automatic planner at the recent AIPS-2000 planning competition. We review the results of the competition, give data for other benchmark domains, and investigate the reasons for the runtime performance of FF compared to HSP. ",The FF Planning System: Fast Plan Generation Through Heuristic Search " The First Trading Agent Competition (TAC) was held from June 22nd to July 8th, 2000. TAC was designed to create a benchmark problem in the complex domain of e-marketplaces and to motivate researchers to apply unique approaches to a common task. This article describes ATTac-2000, the first-place finisher in TAC. ATTac-2000 uses a principled bidding strategy that includes several elements of adaptivity. In addition to the success at the competition, isolated empirical results are presented indicating the robustness and effectiveness of ATTac-2000's adaptive strategy. ",ATTac-2000: An Adaptive Autonomous Bidding Agent " The theoretical properties of qualitative spatial reasoning in the RCC8 framework have been analyzed extensively. However, no empirical investigation has been made yet. Our experiments show that the adaption of the algorithms used for qualitative temporal reasoning can solve large RCC8 instances, even if they are in the phase transition region -- provided that one uses the maximal tractable subsets of RCC8 that have been identified by us. In particular, we demonstrate that the orthogonal combination of heuristic methods is successful in solving almost all apparently hard instances in the phase transition region up to a certain size in reasonable time. ",Efficient Methods for Qualitative Spatial Reasoning " This paper presents our work on development of OWL-driven systems for formal representation and reasoning about terminological knowledge and facts in petrology. The long-term aim of our project is to provide solid foundations for a large-scale integration of various kinds of knowledge, including basic terms, rock classification algorithms, findings and reports. We describe three steps we have taken towards that goal here. First, we develop a semi-automated procedure for transforming a database of igneous rock samples to texts in a controlled natural language (CNL), and then a collection of OWL ontologies. Second, we create an OWL ontology of important petrology terms currently described in natural language thesauri. We describe a prototype of a tool for collecting definitions from domain experts. Third, we present an approach to formalization of current industrial standards for classification of rock samples, which requires linear equations in OWL 2. In conclusion, we discuss a range of opportunities arising from the use of semantic technologies in petrology and outline the future work in this area. ",Towards OWL-based Knowledge Representation in Petrology " This paper discusses a system that accelerates reinforcement learning by using transfer from related tasks. Without such transfer, even if two tasks are very similar at some abstract level, an extensive re-learning effort is required. 
The system achieves much of its power by transferring parts of previously learned solutions rather than a single complete solution. The system exploits strong features in the multi-dimensional function produced by reinforcement learning in solving a particular task. These features are stable and easy to recognize early in the learning process. They generate a partitioning of the state space and thus the function. The partition is represented as a graph. This is used to index and compose functions stored in a case base to form a close approximation to the solution of the new task. Experiments demonstrate that function composition often produces more than an order of magnitude increase in learning rate compared to a basic reinforcement learning algorithm. ","Accelerating Reinforcement Learning by Composing Solutions of Automatically Identified Subtasks" " We propose a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, possible world semantics with a probability distribution which is unconditionally applicable to arbitrary logic programs including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes where each decision is exclusive and independent. It runs on a new data structure called support graphs describing the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probability generalized for logic programs. The complexity analysis shows that when combined with OLDT search for all explanations for observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the one for singly connected Bayesian networks that have been developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm. ",Parameter Learning of Logic Programs for Symbolic-Statistical Modeling " I consider the problem of learning an optimal path graphical model from data and show the problem to be NP-hard for the maximum likelihood and minimum description length approaches and a Bayesian approach. This hardness result holds despite the fact that the problem is a restriction of the polynomially solvable problem of finding the optimal tree graphical model. ",Finding a Path is Harder than Finding a Tree " Simple conceptual graphs are considered as the kernel of most knowledge representation formalisms built upon Sowa's model. Reasoning in this model can be expressed by a graph homomorphism called projection, whose semantics is usually given in terms of positive, conjunctive, existential FOL. We present here a family of extensions of this model, based on rules and constraints, keeping graph homomorphism as the basic operation. We focus on the formal definitions of the different models obtained, including their operational semantics and relationships with FOL, and we analyze the decidability and complexity of the associated problems (consistency and deduction). 
As soon as rules are involved in reasonings, these problems are not decidable, but we exhibit a condition under which they fall in the polynomial hierarchy. These results extend and complete the ones already published by the authors. Moreover we systematically study the complexity of some particular cases obtained by restricting the form of constraints and/or rules. ","Extensions of Simple Conceptual Graphs: the Complexity of Rules and Constraints" " Fusions are a simple way of combining logics. For normal modal logics, fusions have been investigated in detail. In particular, it is known that, under certain conditions, decidability transfers from the component logics to their fusion. Though description logics are closely related to modal logics, they are not necessarily normal. In addition, ABox reasoning in description logics is not covered by the results from modal logics. In this paper, we extend the decidability transfer results from normal modal logics to a large class of description logics. To cover different description logics in a uniform way, we introduce abstract description systems, which can be seen as a common generalization of description and modal logics, and show the transfer results in this general setting. ",Fusions of Description Logics and Abstract Description Systems " Inductive logic programming, or relational learning, is a powerful paradigm for machine learning or data mining. However, in order for ILP to become practically useful, the efficiency of ILP systems must improve substantially. To this end, the notion of a query pack is introduced: it structures sets of similar queries. Furthermore, a mechanism is described for executing such query packs. A complexity analysis shows that considerable efficiency improvements can be achieved through the use of this query pack execution mechanism. This claim is supported by empirical results obtained by incorporating support for query pack execution in two existing learning systems. ","Improving the Efficiency of Inductive Logic Programming Through the Use of Query Packs" " Recent trends in planning research have led to empirical comparison becoming commonplace. The field has started to settle into a methodology for such comparisons, which for obvious practical reasons requires running a subset of planners on a subset of problems. In this paper, we characterize the methodology and examine eight implicit assumptions about the problems, planners and metrics used in many of these comparisons. The problem assumptions are: PR1) the performance of a general purpose planner should not be penalized/biased if executed on a sampling of problems and domains, PR2) minor syntactic differences in representation do not affect performance, and PR3) problems should be solvable by STRIPS capable planners unless they require ADL. The planner assumptions are: PL1) the latest version of a planner is the best one to use, PL2) default parameter settings approximate good performance, and PL3) time cut-offs do not unduly bias outcome. The metrics assumptions are: M1) performance degrades similarly for each planner when run on degraded runtime environments (e.g., machine platform) and M2) the number of plan steps distinguishes performance. We find that most of these assumptions are not supported empirically; in particular, that planners are affected differently by these assumptions. We conclude with a call to the community to devote research resources to improving the state of the practice and especially to enhancing the available benchmark problems. 
",A Critical Assessment of Benchmark Comparison in Planning " An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of ""normal"" examples with only a small percentage of ""abnormal"" or ""interesting"" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy. ",SMOTE: Synthetic Minority Over-sampling Technique " Common wisdom has it that small distinctions in the probabilities (parameters) quantifying a belief network do not matter much for the results of probabilistic queries. Yet, one can develop realistic scenarios under which small variations in network parameters can lead to significant changes in computed queries. A pending theoretical question is then to analytically characterize parameter changes that do or do not matter. In this paper, we study the sensitivity of probabilistic queries to changes in network parameters and prove some tight bounds on the impact that such parameters can have on queries. Our analytic results pinpoint some interesting situations under which parameter changes do or do not matter. These results are important for knowledge engineers as they help them identify influential network parameters. They also help explain some of the previous experimental results and observations with regards to network robustness against parameter changes. ",When do Numbers Really Matter? " Recent years are seeing an increasing need for on-line monitoring of teams of cooperating agents, e.g., for visualization, or performance tracking. However, in monitoring deployed teams, we often cannot rely on the agents to always communicate their state to the monitoring system. This paper presents a non-intrusive approach to monitoring by 'overhearing', where the monitored team's state is inferred (via plan-recognition) from team-members' routine communications, exchanged as part of their coordinated task execution, and observed (overheard) by the monitoring system. Key challenges in this approach include the demanding run-time requirements of monitoring, the scarceness of observations (increasing monitoring uncertainty), and the need to scale-up monitoring to address potentially large teams. 
To address these, we present a set of complementary novel techniques, exploiting knowledge of the social structures and procedures in the monitored team: (i) an efficient probabilistic plan-recognition algorithm, well-suited for processing communications as observations; (ii) an approach to exploiting knowledge of the team's social behavior to predict future observations during execution (reducing monitoring uncertainty); and (iii) monitoring algorithms that trade expressivity for scalability, representing only certain useful monitoring hypotheses, but allowing for any number of agents and their different activities to be represented in a single coherent entity. We present an empirical evaluation of these techniques, in combination and apart, in monitoring a deployed team of agents, running on machines physically distributed across the country, and engaged in complex, dynamic task execution. We also compare the performance of these techniques to human expert and novice monitors, and show that the techniques presented are capable of monitoring at human-expert levels, despite the difficulty of the task. ",Monitoring Teams by Overhearing: A Multi-Agent Plan-Recognition Approach " Spoken dialogue systems promise efficient and natural access to a large variety of information sources and services from any phone. However, current spoken dialogue systems are deficient in their strategies for preventing, identifying and repairing problems that arise in the conversation. This paper reports results on automatically training a Problematic Dialogue Predictor to predict problematic human-computer dialogues using a corpus of 4692 dialogues collected with the 'How May I Help You' (SM) spoken dialogue system. The Problematic Dialogue Predictor can be immediately applied to the system's decision of whether to transfer the call to a human customer care agent, or be used as a cue to the system's dialogue manager to modify its behavior to repair problems, and even perhaps, to prevent them. We show that a Problematic Dialogue Predictor using automatically-obtainable features from the first two exchanges in the dialogue can predict problematic dialogues 13.2% more accurately than the baseline. ","Automatically Training a Problematic Dialogue Predictor for a Spoken Dialogue System" " Recent advances in the study of voting classification algorithms have brought empirical and theoretical results clearly showing the discrimination power of ensemble classifiers. It has been previously argued that the search of this classification power in the design of the algorithms has marginalized the need to obtain interpretable classifiers. Therefore, the question of whether one might have to dispense with interpretability in order to keep classification strength is being raised in a growing number of machine learning or data mining papers. The purpose of this paper is to study both theoretically and empirically the problem. First, we provide numerous results giving insight into the hardness of the simplicity-accuracy tradeoff for voting classifiers. Then we provide an efficient ""top-down and prune"" induction heuristic, WIDC, mainly derived from recent results on the weak learning and boosting frameworks. It is to our knowledge the first attempt to build a voting classifier as a base formula using the weak learning framework (the one which was previously highly successful for decision tree induction), and not the strong learning framework (as usual for such classifiers with boosting-like approaches). 
While it uses a well-known induction scheme previously successful in other classes of concept representations, thus making it easy to implement and compare, WIDC also relies on recent or new results we give about particular cases of boosting known as partition boosting and ranking loss boosting. Experimental results on thirty-one domains, most of which readily available, tend to display the ability of WIDC to produce small, accurate, and interpretable decision committees. ","Inducing Interpretable Voting Classifiers without Trading Accuracy for Simplicity: Theoretical Results, Approximation Algorithms" " We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. We then provide a knowledge compilation map, which analyzes a large number of existing target compilation languages according to their succinctness and their polytime transformations and queries. We argue that such analysis is necessary for placing new compilation approaches within the context of existing ones. We also go beyond classical, flat target compilation languages based on CNF and DNF, and consider a richer, nested class based on directed acyclic graphs (such as OBDDs), which we show to include a relatively large number of target compilation languages. ",A Knowledge Compilation Map " The problem of organizing information for multidocument summarization so that the generated summary is coherent has received relatively little attention. While sentence ordering for single document summarization can be determined from the ordering of sentences in the input article, this is not the case for multidocument summarization where summary sentences may be drawn from different input articles. In this paper, we propose a methodology for studying the properties of ordering information in the news genre and describe experiments done on a corpus of multiple acceptable orderings we developed for the task. Based on these experiments, we implemented a strategy for ordering information that combines constraints from chronological order of events and topical relatedness. Evaluation of our augmented algorithm shows a significant improvement of the ordering over two baseline strategies. ","Inferring Strategies for Sentence Ordering in Multidocument News Summarization" " We consider the problem of designing the the utility functions of the utility-maximizing agents in a multi-agent system so that they work synergistically to maximize a global utility. The particular problem domain we explore is the control of network routing by placing agents on all the routers in the network. Conventional approaches to this task have the agents all use the Ideal Shortest Path routing Algorithm (ISPA). We demonstrate that in many cases, due to the side-effects of one agent's actions on another agent's performance, having agents use ISPA's is suboptimal as far as global aggregate cost is concerned, even when they are only used to route infinitesimally small amounts of traffic. The utility functions of the individual agents are not ""aligned"" with the global utility, intuitively speaking. As a particular example of this we present an instance of Braess' paradox in which adding new links to a network whose agents all use the ISPA results in a decrease in overall throughput. 
We also demonstrate that load-balancing, in which the agents' decisions are collectively made to optimize the global cost incurred by all traffic currently being routed, is suboptimal as far as global cost averaged across time is concerned. This is also due to 'side-effects', in this case of current routing decisions on future traffic. The mathematics of Collective Intelligence (COIN) is concerned precisely with the issue of avoiding such deleterious side-effects in multi-agent systems, both over time and space. We present key concepts from that mathematics and use them to derive an algorithm whose ideal version should have better performance than that of having all agents use the ISPA, even in the infinitesimal limit. We present experiments verifying this, and also showing that a machine-learning-based version of this COIN algorithm in which costs are only imprecisely estimated via empirical means (a version potentially applicable in the real world) also outperforms the ISPA, despite having access to less information than does the ISPA. In particular, this COIN algorithm almost always avoids Braess' paradox. ","Collective Intelligence, Data Routing and Braess' Paradox" " This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomial-sized one. One algorithm uses approximate linear programming, and the second approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 10^40 states, demonstrating a promising indication of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time. ",Efficient Solution Algorithms for Factored MDPs " Human intelligence lies in the algorithm, the nature of the algorithm lies in classification, and classification is equivalent to outlier detection. Many algorithms have been proposed to detect outliers, along with many definitions. The unsatisfying point is that these definitions remain vague, which makes each solution an ad hoc one. We analyze the nature of outliers and give two clear definitions. We then develop an efficient RDD algorithm, which converts the outlier problem into a pattern and degree problem.
Furthermore, a collapse mechanism is introduced by the IIR algorithm, which can be combined seamlessly with the RDD algorithm and serves the final decision. Both algorithms originate from the study of general AI. The combined version is named the Pe algorithm, which forms the basis of intelligent decision making. We introduce the longest k-turn subsequence problem and a corresponding solution as an example to illustrate how the Pe algorithm detects curve-type outliers. We also compare the IIR algorithm with the Pe algorithm, which gives a better understanding of both. A short discussion of intelligence is added to demonstrate the function of the Pe algorithm. The experimental results indicate its robustness. ",Intelligent decision: towards interpreting the Pe Algorithm " We present a novel Natural Evolution Strategy (NES) variant, the Rank-One NES (R1-NES), which uses a low rank approximation of the search distribution covariance matrix. The algorithm allows computation of the natural gradient with cost linear in the dimensionality of the parameter space, and excels in solving high-dimensional non-separable problems, including the best result to date on the Rosenbrock function (512 dimensions). ",A Linear Time Natural Evolution Strategy for Non-Separable Functions " Galles and Pearl claimed that ""for recursive models, the causal model framework does not add any restrictions to counterfactuals, beyond those imposed by Lewis's [possible-worlds] framework."" This claim is examined carefully, with the goal of clarifying the exact relationship between causal models and Lewis's framework. Recursive models are shown to correspond precisely to a subclass of (possible-world) counterfactual structures. On the other hand, a slight generalization of recursive models, models where all equations have unique solutions, is shown to be incomparable in expressive power to counterfactual structures, despite the fact that the Galles and Pearl arguments should apply to them as well. The problem with the Galles and Pearl argument is identified: an axiom that they viewed as irrelevant, because it involved disjunction (which was not in their language), is not irrelevant at all. ",From Causal Models To Counterfactual Structures " We look more carefully at the modeling of causality using structural equations. It is clear that the structural equations can have a major impact on the conclusions we draw about causality. In particular, the choice of variables and their values can also have a significant impact on causality. These choices are, to some extent, subjective. We consider what counts as an appropriate choice. More generally, we consider what makes a model an appropriate model, especially if we want to take defaults into account, as was argued is necessary in recent work. ",Actual causation and the art of modeling " Two distinct algorithms are presented to extract (schemata of) resolution proofs from closed tableaux for propositional schemata. The first one handles the most efficient version of the tableau calculus but generates very complex derivations (denoted by rather elaborate rewrite systems). The second one has the advantage that much simpler systems can be obtained, however the considered proof procedure is less efficient. ",Generating Schemata of Resolution Proofs " In the current study we examine an application of machine learning methods to model the retention constants in thin-layer chromatography (TLC). 
This problem can be described with hundreds or even thousands of descriptors relevant to various molecular properties, most of them redundant and not relevant to retention constant prediction. Hence, we employed feature selection to significantly reduce the number of attributes. Additionally, we tested the application of a bagging procedure to feature selection. Random forest regression models were then built using the selected variables. The resulting models have better correlation with the experimental data than the reference models obtained with linear regression. Cross-validation confirms the robustness of the models. ","Random forest models of the retention constants in the thin layer chromatography" " Nowadays, ontologies are of growing interest in Data Fusion applications. As a matter of fact, ontologies are seen as a semantic tool for describing and reasoning about sensor data, objects, relations and general domain theories. In addition, uncertainty is perhaps one of the most important characteristics of the data and information handled by Data Fusion. However, the fundamental nature of ontologies implies that ontologies describe only asserted and veracious facts of the world. Different probabilistic, fuzzy and evidential approaches already exist to fill this gap; this paper recaps the most popular tools. However, none of these tools exactly meets our purposes. Therefore, we constructed a Dempster-Shafer ontology that can be imported into any specific domain ontology and that enables us to instantiate it in an uncertain manner. We also developed a Java application that enables reasoning about these uncertain ontological instances. ","Uncertainty in Ontologies: Dempster-Shafer Theory for Data Fusion Applications" " Individuals have an intuitive perception of what makes a good coincidence. Though the sensitivity to coincidences has often been presented as resulting from an erroneous assessment of probability, it appears to be a genuine competence, based on non-trivial computations. The model presented here suggests that coincidences occur when subjects perceive complexity drops. Co-occurring events are, together, simpler than if considered separately. This model leads to a possible redefinition of subjective probability. ",Coincidences and the encounter problem: A formal account " The study of opinions, their formation and change, is one of the defining topics addressed by social psychology, but in recent years other disciplines, like computer science and complexity, have tried to deal with this issue. Despite the flourishing of different models and theories in both fields, several key questions still remain unanswered. The understanding of how opinions change and the way they are affected by social influence are challenging issues requiring a thorough analysis of opinions per se, but also of the way in which they travel between agents' minds and are modulated by these exchanges. To account for the two-faceted nature of opinions, which are mental entities undergoing complex social processes, we outline a preliminary model in which a cognitive theory of opinions is put forward and paired with a formal description of opinions and of their spreading among minds. Furthermore, investigating social influence also implies the necessity to account for the way in which people change their minds, as a consequence of interacting with other people, and the need to explain the higher or lower persistence of such changes. 
","Rooting opinions in the minds: a cognitive model and a formal account of opinions and their dynamics" " The study of opinions, their formation and change, is one of the defining topics addressed by social psychology, but in recent years other disciplines, as computer science and complexity, have addressed this challenge. Despite the flourishing of different models and theories in both fields, several key questions still remain unanswered. The aim of this paper is to challenge the current theories on opinion by putting forward a cognitively grounded model where opinions are described as specific mental representations whose main properties are put forward. A comparison with reputation will be also presented. ",Understanding opinions. A cognitive and formal account " For large, real-world inductive learning problems, the number of training examples often must be limited due to the costs associated with procuring, preparing, and storing the training examples and/or the computational costs associated with learning from them. In such circumstances, one question of practical importance is: if only n training examples can be selected, in what proportion should the classes be represented? In this article we help to answer this question by analyzing, for a fixed training-set size, the relationship between the class distribution of the training data and the performance of classification trees induced from these data. We study twenty-six data sets and, for each, determine the best class distribution for learning. The naturally occurring class distribution is shown to generally perform well when classifier performance is evaluated using undifferentiated error rate (0/1 loss). However, when the area under the ROC curve is used to evaluate classifier performance, a balanced distribution is shown to perform well. Since neither of these choices for class distribution always generates the best-performing classifier, we introduce a budget-sensitive progressive sampling algorithm for selecting training examples based on the class associated with each example. An empirical analysis of this algorithm shows that the class distribution of the resulting training set yields classifiers with good (nearly-optimal) classification performance. ","Learning When Training Data are Costly: The Effect of Class Distribution on Tree Induction" " In recent years research in the planning community has moved increasingly toward s application of planners to realistic problems involving both time and many typ es of resources. For example, interest in planning demonstrated by the space res earch community has inspired work in observation scheduling, planetary rover ex ploration and spacecraft control domains. Other temporal and resource-intensive domains including logistics planning, plant control and manufacturing have also helped to focus the community on the modelling and reasoning issues that must be confronted to make planning technology meet the challenges of application. The International Planning Competitions have acted as an important motivating fo rce behind the progress that has been made in planning since 1998. The third com petition (held in 2002) set the planning community the challenge of handling tim e and numeric resources. This necessitated the development of a modelling langua ge capable of expressing temporal and numeric properties of planning domains. In this paper we describe the language, PDDL2.1, that was used in the competition. 
We describe the syntax of the language, its formal semantics and the validation of concurrent plans. We observe that PDDL2.1 has considerable modelling power --- exceeding the capabilities of current planning technology --- and presents a number of important challenges to the research community. ",PDDL2.1: An Extension to PDDL for Expressing Temporal Planning Domains " Despite the significant progress in multiagent teamwork, existing research does not address the optimality of its prescriptions nor the complexity of the teamwork problem. Without a characterization of the optimality-complexity tradeoffs, it is impossible to determine whether the assumptions and approximations made by a particular theory gain enough efficiency to justify the losses in overall performance. To provide a tool for use by multiagent researchers in evaluating this tradeoff, we present a unified framework, the COMmunicative Multiagent Team Decision Problem (COM-MTDP). The COM-MTDP model combines and extends existing multiagent theories, such as decentralized partially observable Markov decision processes and economic team theory. In addition to their generality of representation, COM-MTDPs also support the analysis of both the optimality of team performance and the computational complexity of the agents' decision problem. In analyzing complexity, we present a breakdown of the computational complexity of constructing optimal teams under various classes of problem domains, along the dimensions of observability and communication cost. In analyzing optimality, we exploit the COM-MTDP's ability to encode existing teamwork theories and models to encode two instantiations of joint intentions theory taken from the literature. Furthermore, the COM-MTDP model provides a basis for the development of novel team coordination algorithms. We derive a domain-independent criterion for optimal communication and provide a comparative analysis of the two joint intentions instantiations with respect to this optimal policy. We have implemented a reusable, domain-independent software package based on COM-MTDPs to analyze teamwork coordination strategies, and we demonstrate its use by encoding and evaluating the two joint intentions strategies within an example domain. ","The Communicative Multiagent Team Decision Problem: Analyzing Teamwork Theories and Models" " Adjustable autonomy refers to entities dynamically varying their own autonomy, transferring decision-making control to other entities (typically agents transferring control to human users) in key situations. Determining whether and when such transfers-of-control should occur is arguably the fundamental research problem in adjustable autonomy. Previous work has investigated various approaches to addressing this problem but has often focused on individual agent-human interactions. Unfortunately, domains requiring collaboration between teams of agents and humans reveal two key shortcomings of these previous approaches. First, these approaches use rigid one-shot transfers of control that can result in unacceptable coordination failures in multiagent settings. Second, they ignore costs (e.g., in terms of time delays or effects on actions) to an agent's team due to such transfers-of-control. To remedy these problems, this article presents a novel approach to adjustable autonomy, based on the notion of a transfer-of-control strategy. 
A transfer-of-control strategy consists of a conditional sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from an agent to a user or vice versa) and (ii) actions to change an agent's pre-specified coordination constraints with team members, aimed at minimizing miscoordination costs. The goal is for high-quality individual decisions to be made with minimal disruption to the coordination of the team. We present a mathematical model of transfer-of-control strategies. The model guides and informs the operationalization of the strategies using Markov Decision Processes, which select an optimal strategy, given an uncertain environment and costs to the individuals and teams. The approach has been carefully evaluated, including via its use in a real-world, deployed multi-agent system that assists a research group in its daily activities. ",Towards Adjustable Autonomy for the Real World " In this paper, we analyze the decision version of the NK landscape model from the perspective of threshold phenomena and phase transitions under two random distributions, the uniform probability model and the fixed ratio model. For the uniform probability model, we prove that the phase transition is easy in the sense that there is a polynomial algorithm that can solve a random instance of the problem with the probability asymptotic to 1 as the problem size tends to infinity. For the fixed ratio model, we establish several upper bounds for the solubility threshold, and prove that random instances with parameters above these upper bounds can be solved polynomially. This, together with our empirical study for random instances generated below and in the phase transition region, suggests that the phase transition of the fixed ratio model is also easy. ",An Analysis of Phase Transition in NK Landscapes " This paper presents an approach to expert-guided subgroup discovery. The main step of the subgroup discovery process, the induction of subgroup descriptions, is performed by a heuristic beam search algorithm, using a novel parametrized definition of rule quality which is analyzed in detail. The other important steps of the proposed subgroup discovery process are the detection of statistically significant properties of selected subgroups and subgroup visualization: statistically significant properties are used to enrich the descriptions of induced subgroups, while the visualization shows subgroup properties in the form of distributions of the numbers of examples in the subgroups. The approach is illustrated by the results obtained for a medical problem of early detection of patient risk groups. ",Expert-Guided Subgroup Discovery: Methodology and Application " Independence -- the study of what is relevant to a given problem of reasoning -- has received an increasing attention from the AI community. In this paper, we consider two basic forms of independence, namely, a syntactic one and a semantic one. We show features and drawbacks of them. In particular, while the syntactic form of independence is computationally easy to check, there are cases in which things that intuitively are not relevant are not recognized as such. We also consider the problem of forgetting, i.e., distilling from a knowledge base only the part that is relevant to the set of queries constructed from a subset of the alphabet. 
While such a process is computationally hard, it allows for a simplification of subsequent reasoning, and can thus be viewed as a form of compilation: once the relevant part of a knowledge base has been extracted, all reasoning tasks to be performed can be simplified. ","Propositional Independence - Formula-Variable Independence and Forgetting" " We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization), as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulations suggest better results with sequential methods. The methods can be applied in both online and batch scenarios such as tempo tracking and transcription and are thus potentially useful in a number of music applications such as adaptive automatic accompaniment, score typesetting and music information retrieval. ",Monte Carlo Methods for Tempo Tracking and Rhythm Quantization " Bayesian belief networks have grown to prominence because they provide compact representations for many problems for which probabilistic inference is appropriate, and there are algorithms to exploit this compactness. The next step is to allow compact representations of the conditional probabilities of a variable given its parents. In this paper we present such a representation that exploits contextual independence in terms of parent contexts; which variables act as parents may depend on the value of other variables. The internal representation is in terms of contextual factors (confactors), each of which is simply a pair of a context and a table. The algorithm, contextual variable elimination, is based on the standard variable elimination algorithm that eliminates the non-query variables in turn, but when eliminating a variable, the tables that need to be multiplied can depend on the context. This algorithm reduces to standard variable elimination when there is no contextual independence structure to exploit. We show how this can be much more efficient than variable elimination when there is structure to exploit. We explain why this new method can exploit more structure than previous methods for structured belief network inference and an analogous algorithm that uses trees. ",Exploiting Contextual Independence In Probabilistic Inference " In this article we present an algorithm to compute bounds on the marginals of a graphical model. For several small clusters of nodes, upper and lower bounds on the marginal values are computed independently of the rest of the network. The range of allowed probability distributions over the surrounding nodes is restricted using earlier computed bounds. As we will show, this can be considered as a set of constraints in a linear programming problem of which the objective function is the marginal probability of the center nodes. 
In this way, knowledge about the marginals of neighbouring clusters is passed to other clusters, thereby tightening the bounds on their marginals. We show that sharp bounds can be obtained for undirected and directed graphs that are used for practical applications, but for which exact computations are infeasible. ",Bound Propagation " Policies of Markov Decision Processes (MDPs) determine the next action to execute from the current state and, possibly, the history (the past states). When the number of states is large, succinct representations are often used to compactly represent both the MDPs and the policies in a reduced amount of space. In this paper, some problems related to the size of succinctly represented policies are analyzed. Namely, it is shown that some MDPs have policies that can only be represented in space super-polynomial in the size of the MDP, unless the polynomial hierarchy collapses. This fact motivates the study of the problem of deciding whether a given MDP has a policy of a given size and reward. Since some algorithms for MDPs work by finding a succinct representation of the value function, the problem of deciding the existence of a succinct representation of a value function of a given size and reward is also considered. ",On Polynomial Sized MDP Succinct Policies " We describe a system for specifying the effects of actions. Unlike those commonly used in AI planning, our system uses an action description language that allows one to specify the effects of actions using domain rules, which are state constraints that can entail new action effects from old ones. Declaratively, an action domain in our language corresponds to a nonmonotonic causal theory in the situation calculus. Procedurally, such an action domain is compiled into a set of logical theories, one for each action in the domain, from which fully instantiated successor state-like axioms and STRIPS-like systems are then generated. We expect the system to be a useful tool for knowledge engineers writing action specifications for classical AI planning systems, GOLOG systems, and other systems where formal specifications of actions are needed. ","Compiling Causal Theories to Successor State Axioms and STRIPS-Like Systems" " VHPOP is a partial order causal link (POCL) planner loosely based on UCPOP. It draws from the experience gained in the early to mid 1990s on flaw selection strategies for POCL planning, and combines this with more recent developments in the field of domain independent planning such as distance based heuristics and reachability analysis. We present an adaptation of the additive heuristic for plan space planning, and modify it to account for possible reuse of existing actions in a plan. We also propose a large set of novel flaw selection strategies, and show how these can help us solve more problems than previously possible by POCL planners. VHPOP also supports planning with durative actions by incorporating standard techniques for temporal constraint reasoning. We demonstrate that the same heuristic techniques used to boost the performance of classical POCL planning can be effective in domains with durative actions as well. The result is a versatile heuristic POCL planner competitive with established CSP-based and heuristic state space planners. ",VHPOP: Versatile Heuristic Partial Order Planner " The SHOP2 planning system received one of the awards for distinguished performance in the 2002 International Planning Competition. 
This paper describes the features of SHOP2 which enabled it to excel in the competition, especially those aspects of SHOP2 that deal with temporal and metric planning domains. ",SHOP2: An HTN Planning System " Hierarchical task decomposition is a method used in many agent systems to organize agent knowledge. This work shows how the combination of a hierarchy and persistent assertions of knowledge can lead to difficulty in maintaining logical consistency in asserted knowledge. We explore the problematic consequences of persistent assumptions in the reasoning process and introduce novel potential solutions. Having implemented one of the possible solutions, Dynamic Hierarchical Justification, we demonstrate its effectiveness with an empirical analysis. ","An Architectural Approach to Ensuring Consistency in Hierarchical Execution" " The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year. The verification algorithm correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes, resulting in precision of 0.73 and recall of 0.95. We validated the reinduction algorithm on ten Web sources. We were able to successfully reinduce the wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data extraction task. ",Wrapper Maintenance: A Machine Learning Approach " The cognitive research on reputation has shown several interesting properties that can improve both the quality of services and the security in distributed electronic environments. In this paper, the impact of reputation on decision-making under scarcity of information will be shown. First, a cognitive theory of reputation will be presented, then a selection of simulation experimental results from different studies will be discussed. Such results concern the benefits of reputation when agents need to find out good sellers in a virtual market-place under uncertainty and informational cheating. ",Exploiting Reputation in Distributed Virtual Environments " In this paper we examine the application of the random forest classifier for the all relevant feature selection problem. To this end we first examine two recently proposed all relevant feature selection algorithms, both being random forest wrappers, on a series of synthetic data sets of varying size. We show that reasonable accuracy of predictions can be achieved and that heuristic algorithms that were designed to handle the all relevant problem have performance that is close to that of the reference ideal algorithm. 
Then, we apply one of the algorithms to four families of semi-synthetic data sets to assess how the properties of a particular data set influence the results of feature selection. Finally, we test the procedure using a well-known gene expression data set. The relevance of nearly all previously established important genes was confirmed; moreover, the relevance of several new ones was discovered. ",The All Relevant Feature Selection using Random Forest " Unary operator domains -- i.e., domains in which operators have a single effect -- arise naturally in many control problems. In its most general form, the problem of STRIPS planning in unary operator domains is known to be as hard as the general STRIPS planning problem -- both are PSPACE-complete. However, unary operator domains induce a natural structure, called the domain's causal graph. This graph relates the preconditions and effect of each domain operator. Causal graphs were exploited by Williams and Nayak in order to analyze plan generation for one of the controllers in NASA's Deep-Space One spacecraft. There, they utilized the fact that when this graph is acyclic, a serialization ordering over any subgoal can be obtained quickly. In this paper we conduct a comprehensive study of the relationship between the structure of a domain's causal graph and the complexity of planning in this domain. On the positive side, we show that a non-trivial polynomial time plan generation algorithm exists for domains whose causal graph induces a polytree with a constant bound on its node indegree. On the negative side, we show that even plan existence is hard when the graph is a directed-path singly connected DAG. More generally, we show that the number of paths in the causal graph is closely related to the complexity of planning in the associated domain. Finally we relate our results to the question of complexity of planning with serializable subgoals. ",Structure and Complexity in Planning with Unary Operators " Recently, planning based on answer set programming has been proposed as an approach towards realizing declarative planning systems. In this paper, we present the language Kc, which extends the declarative planning language K by action costs. Kc provides the notion of admissible and optimal plans, which are plans whose overall action costs are within a given limit or, respectively, minimal over all plans (i.e., cheapest plans). As we demonstrate, this novel language allows for expressing some nontrivial planning tasks in a declarative way. Furthermore, it can be utilized for representing planning problems under other optimality criteria, such as computing ``shortest'' plans (with the least number of steps), and refinement combinations of cheapest and fastest plans. We study complexity aspects of the language Kc and provide a transformation to logic programs, such that planning problems are solved via answer set programming. Furthermore, we report experimental results on selected problems. Our experience is encouraging and suggests that answer set planning may be a valuable approach to expressive planning systems in which intricate planning problems can be naturally specified and solved. ",Answer Set Planning Under Action Costs " In common-interest stochastic games all players receive an identical payoff. Players participating in such games must learn to coordinate with each other in order to receive the highest-possible value. 
A number of reinforcement learning algorithms have been proposed for this problem, and some have been shown to converge to good solutions in the limit. In this paper we show that using very simple model-based algorithms, much better (i.e., polynomial) convergence rates can be attained. Moreover, our model-based algorithms are guaranteed to converge to the optimal value, unlike many of the existing algorithms. ",Learning to Coordinate Efficiently: A Model-based Approach " SAPA is a domain-independent heuristic forward chaining planner that can handle durative actions, metric resource constraints, and deadline goals. It is designed to be capable of handling the multi-objective nature of metric temporal planning. Our technical contributions include (i) planning-graph based methods for deriving heuristics that are sensitive to both cost and makespan (ii) techniques for adjusting the heuristic estimates to take action interactions and metric resource limitations into account and (iii) a linear time greedy post-processing technique to improve execution flexibility of the solution plans. An implementation of SAPA using many of the techniques presented in this paper was one of the best domain independent planners for domains with metric and temporal constraints in the third International Planning Competition, held at AIPS-02. We describe the technical details of extracting the heuristics and present an empirical evaluation of the current implementation of SAPA. ",SAPA: A Multi-objective Metric Temporal Planner " The recent emergence of heavily-optimized modal decision procedures has highlighted the key role of empirical testing in this domain. Unfortunately, the introduction of extensive empirical tests for modal logics is recent, and so far none of the proposed test generators is very satisfactory. To cope with this fact, we present a new random generation method that provides benefits over previous methods for generating empirical tests. It fixes and much generalizes one of the best-known methods, the random CNF_[]m test, allowing for generating a much wider variety of problems, covering in principle the whole input space. Our new method produces much more suitable test sets for the current generation of modal decision procedures. We analyze the features of the new method by means of an extensive collection of empirical tests. ","A New General Method to Generate Random Modal Formulae for Testing Decision Procedures" " Despite their near dominance, heuristic state search planners still lag behind disjunctive planners in the generation of parallel plans in classical planning. The reason is that directly searching for parallel solutions in state space planners would require the planners to branch on all possible subsets of parallel actions, thus increasing the branching factor exponentially. We present a variant of our heuristic state search planner AltAlt, called AltAltp which generates parallel plans by using greedy online parallelization of partial plans. The greedy approach is significantly informed by the use of novel distance heuristics that AltAltp derives from a graphplan-style planning graph for the problem. While this approach is not guaranteed to provide optimal parallel plans, empirical results show that AltAltp is capable of generating good quality parallel plans at a fraction of the cost incurred by the disjunctive planners. 
",AltAltp: Online Parallelization of Plans with Heuristic State Search " We address the problem of propositional logic-based abduction, i.e., the problem of searching for a best explanation for a given propositional observation according to a given propositional knowledge base. We give a general algorithm, based on the notion of projection; then we study restrictions over the representations of the knowledge base and of the query, and find new polynomial classes of abduction problems. ",New Polynomial Classes for Logic-Based Abduction " We present some techniques for planning in domains specified with the recent standard language PDDL2.1, supporting 'durative actions' and numerical quantities. These techniques are implemented in LPG, a domain-independent planner that took part in the 3rd International Planning Competition (IPC). LPG is an incremental, any time system producing multi-criteria quality plans. The core of the system is based on a stochastic local search method and on a graph-based representation called 'Temporal Action Graphs' (TA-graphs). This paper focuses on temporal planning, introducing TA-graphs and proposing some techniques to guide the search in LPG using this representation. The experimental results of the 3rd IPC, as well as further results presented in this paper, show that our techniques can be very effective. Often LPG outperforms all other fully-automated planners of the 3rd IPC in terms of speed to derive a solution, or quality of the solutions that can be produced. ","Planning Through Stochastic Local Search and Temporal Action Graphs in LPG" " TALplanner is a forward-chaining planner that relies on domain knowledge in the shape of temporal logic formulas in order to prune irrelevant parts of the search space. TALplanner recently participated in the third International Planning Competition, which had a clear emphasis on increasing the complexity of the problem domains being used as benchmark tests and the expressivity required to represent these domains in a planning system. Like many other planners, TALplanner had support for some but not all aspects of this increase in expressivity, and a number of changes to the planner were required. After a short introduction to TALplanner, this article describes some of the changes that were made before and during the competition. We also describe the process of introducing suitable domain knowledge for several of the competition domains. ",TALplanner in IPC-2002: Extensions and Control Rules " The automatic generation of decision trees based on off-line reasoning on models of a domain is a reasonable compromise between the advantages of using a model-based approach in technical domains and the constraints imposed by embedded applications. In this paper we extend the approach to deal with temporal information. We introduce a notion of temporal decision tree, which is designed to make use of relevant information as long as it is acquired, and we present an algorithm for compiling such trees from a model-based reasoning system. ","Temporal Decision Trees: Model-based Diagnosis of Dynamic Systems On-Board" " The performance of anytime algorithms can be improved by simultaneously solving several instances of algorithm-problem pairs. These pairs may include different instances of a problem (such as starting from a different initial state), different algorithms (if several alternatives exist), or several runs of the same algorithm (for non-deterministic algorithms). 
In this paper we present a methodology for designing an optimal scheduling policy based on the statistical characteristics of the algorithms involved. We formally analyze the case where the processes share resources (a single-processor model), and provide an algorithm for optimal scheduling. We analyze, theoretically and empirically, the behavior of our scheduling algorithm for various distribution types. Finally, we present empirical results of applying our scheduling algorithm to the Latin Square problem. ","Optimal Schedules for Parallelizing Anytime Algorithms: The Case of Shared Resources" " Auctions are becoming an increasingly popular method for transacting business, especially over the Internet. This article presents a general approach to building autonomous bidding agents to bid in multiple simultaneous auctions for interacting goods. A core component of our approach learns a model of the empirical price dynamics based on past data and uses the model to analytically calculate, to the greatest extent possible, optimal bids. We introduce a new and general boosting-based algorithm for conditional density estimation problems of this kind, i.e., supervised learning problems in which the goal is to estimate the entire conditional distribution of the real-valued label. This approach is fully implemented as ATTac-2001, a top-scoring agent in the second Trading Agent Competition (TAC-01). We present experiments demonstrating the effectiveness of our boosting-based price predictor relative to several reasonable alternatives. ","Decision-Theoretic Bidding Based on Learned Density Models in Simultaneous, Interacting Auctions" " Planning with numeric state variables has been a challenge for many years, and was a part of the 3rd International Planning Competition (IPC-3). Currently one of the most popular and successful algorithmic techniques in STRIPS planning is to guide search by a heuristic function, where the heuristic is based on relaxing the planning task by ignoring the delete lists of the available actions. We present a natural extension of ``ignoring delete lists'' to numeric state variables, preserving the relevant theoretical properties of the STRIPS relaxation under the condition that the numeric task at hand is ``monotonic''. We then identify a subset of the numeric IPC-3 competition language, ``linear tasks'', where monotonicity can be achieved by pre-processing. Based on that, we extend the algorithms used in the heuristic planning system FF to linear tasks. The resulting system Metric-FF is, according to the IPC-3 results which we discuss, one of the two currently most efficient numeric planners. ","The Metric-FF Planning System: Translating ""Ignoring Delete Lists"" to Numeric State Variables" " Nanson's and Baldwin's voting rules select a winner by successively eliminating candidates with low Borda scores. We show that these rules have a number of desirable computational properties. In particular, with unweighted votes, it is NP-hard to manipulate either rule with one manipulator, whilst with weighted votes, it is NP-hard to manipulate either rule with a small number of candidates and a coalition of manipulators. As only a couple of other voting rules are known to be NP-hard to manipulate with a single manipulator, Nanson's and Baldwin's rules appear to be particularly resistant to manipulation from a theoretical perspective. We also propose a number of approximation methods for manipulating these two rules. 
Experiments demonstrate that both rules are often difficult to manipulate in practice. These results suggest that elimination style voting rules deserve further study. ",Manipulation of Nanson's and Baldwin's Rules " Search is a major technique for planning. It amounts to exploring a state space of planning domains typically modeled as a directed graph. However, prohibitively large sizes of the search space make search expensive. Developing better heuristic functions has been the main technique for improving search efficiency. Nevertheless, recent studies have shown that improving heuristics alone has certain fundamental limits on improving search efficiency. Recently, a new direction of research called partial order based reduction (POR) has been proposed as an alternative to improving heuristics. POR has shown promise in speeding up searches. POR has been extensively studied in model checking research and is a key enabling technique for scalability of model checking systems. Although the POR theory has been extensively studied in model checking, it has never been developed systematically for planning before. In addition, the conditions for POR in the model checking theory are abstract and not directly applicable in planning. Previous works on POR algorithms for planning did not establish the connection between these algorithms and existing theory in model checking. In this paper, we develop a theory for POR in planning. The new theory we develop connects the stubborn set theory in model checking and POR methods in planning. We show that previous POR algorithms in planning can be explained by the new theory. Based on the new theory, we propose a new, stronger POR algorithm. Experimental results on various planning domains show further search cost reduction using the new algorithm. ",Theory and Algorithms for Partial Order Based Reduction in Planning " Set and multiset variables in constraint programming have typically been represented using subset bounds. However, this is a weak representation that neglects potentially useful information about a set such as its cardinality. For set variables, the length-lex (LL) representation successfully provides information about the length (cardinality) and position in the lexicographic ordering. For multiset variables, where elements can be repeated, we consider richer representations that take into account additional information. We study eight different representations in which we maintain bounds according to one of the eight different orderings: length-(co)lex (LL/LC), variety-(co)lex (VL/VC), length-variety-(co)lex (LVL/LVC), and variety-length-(co)lex (VLL/VLC) orderings. These representations integrate together information about the cardinality, variety (number of distinct elements in the multiset), and position in some total ordering. Theoretical and empirical comparisons of expressiveness and compactness of the eight representations suggest that length-variety-(co)lex (LVL/LVC) and variety-length-(co)lex (VLL/VLC) usually give tighter bounds after constraint propagation. We implement the eight representations and evaluate them against the subset bounds representation with cardinality and variety reasoning. Results demonstrate that they offer significantly better pruning and runtime. 
","A Comparison of Lex Bounds for Multiset Variables in Constraint Programming" " This paper reports the outcome of the third in the series of biennial international planning competitions, held in association with the International Conference on AI Planning and Scheduling (AIPS) in 2002. In addition to describing the domains, the planners and the objectives of the competition, the paper includes analysis of the results. The results are analysed from several perspectives, in order to address the questions of comparative performance between planners, comparative difficulty of domains, the degree of agreement between planners about the relative difficulty of individual problem instances and the question of how well planners scale relative to one another over increasingly difficult problems. The paper addresses these questions through statistical analysis of the raw results of the competition, in order to determine which results can be considered to be adequately supported by the data. The paper concludes with a discussion of some challenges for the future of the competition series. ",The 3rd International Planning Competition: Results and Analysis " As computational agents are developed for increasingly complicated e-commerce applications, the complexity of the decisions they face demands advances in artificial intelligence techniques. For example, an agent representing a seller in an auction should try to maximize the seller's profit by reasoning about a variety of possibly uncertain pieces of information, such as the maximum prices various buyers might be willing to pay, the possible prices being offered by competing sellers, the rules by which the auction operates, the dynamic arrival and matching of offers to buy and sell, and so on. A naive application of multiagent reasoning techniques would require the seller's agent to explicitly model all of the other agents through an extended time horizon, rendering the problem intractable for many realistically-sized problems. We have instead devised a new strategy that an agent can use to determine its bid price based on a more tractable Markov chain model of the auction process. We have experimentally identified the conditions under which our new strategy works well, as well as how well it works in comparison to the optimal performance the agent could have achieved had it known the future. Our results show that our new strategy in general performs well, outperforming other tractable heuristic strategies in a majority of experiments, and is particularly effective in a 'seller?s market', where many buy offers are available. ","Use of Markov Chains to Design an Agent Bidding Strategy for Continuous Double Auctions" " This paper presents a new classifier combination technique based on the Dempster-Shafer theory of evidence. The Dempster-Shafer theory of evidence is a powerful method for combining measures of evidence from different classifiers. However, since each of the available methods that estimates the evidence of classifiers has its own limitations, we propose here a new implementation which adapts to training data so that the overall mean square error is minimized. The proposed technique is shown to outperform most available classifier combination methods when tested on three different classification problems. 
","A New Technique for Combining Multiple Classifiers using The Dempster-Shafer Theory of Evidence" " Although many algorithms have been designed to construct Bayesian network structures using different approaches and principles, they all employ only two methods: those based on independence criteria, and those based on a scoring function and a search procedure (although some methods combine the two). Within the score+search paradigm, the dominant approach uses local search methods in the space of directed acyclic graphs (DAGs), where the usual choices for defining the elementary modifications (local changes) that can be applied are arc addition, arc deletion, and arc reversal. In this paper, we propose a new local search method that uses a different search space, and which takes account of the concept of equivalence between network structures: restricted acyclic partially directed graphs (RPDAGs). In this way, the number of different configurations of the search space is reduced, thus improving efficiency. Moreover, although the final result must necessarily be a local optimum given the nature of the search method, the topology of the new search space, which avoids making early decisions about the directions of the arcs, may help to find better local optima than those obtained by searching in the DAG space. Detailed results of the evaluation of the proposed search method on several test problems, including the well-known Alarm Monitoring System, are also presented. ","Searching for Bayesian Network Structures in the Space of Restricted Acyclic Partially Directed Graphs" " The size and complexity of software and hardware systems have significantly increased in the past years. As a result, it is harder to guarantee their correct behavior. One of the most successful methods for automated verification of finite-state systems is model checking. Most of the current model-checking systems use binary decision diagrams (BDDs) for the representation of the tested model and in the verification process of its properties. Generally, BDDs allow a canonical compact representation of a boolean function (given an order of its variables). The more compact the BDD is, the better performance one gets from the verifier. However, finding an optimal order for a BDD is an NP-complete problem. Therefore, several heuristic methods based on expert knowledge have been developed for variable ordering. We propose an alternative approach in which the variable ordering algorithm gains 'ordering experience' from training models and uses the learned knowledge for finding good orders. Our methodology is based on offline learning of pair precedence classifiers from training models, that is, learning which variable pair permutation is more likely to lead to a good order. For each training model, a number of training sequences are evaluated. Every training model variable pair permutation is then tagged based on its performance on the evaluated orders. The tagged permutations are then passed through a feature extractor and are given as examples to a classifier creation algorithm. Given a model for which an order is requested, the ordering algorithm consults each precedence classifier and constructs a pair precedence table which is used to create the order. Our algorithm was integrated with SMV, which is one of the most widely used verification systems. 
Preliminary empirical evaluation of our methodology, using real benchmark models, shows performance that is better than random ordering and is competitive with existing algorithms that use expert knowledge. We believe that in sub-domains of models (alu, caches, etc.) our system will prove even more valuable. This is because it features the ability to learn sub-domain knowledge, something that no other ordering algorithm does. ",Learning to Order BDD Variables in Verification " Supply chain formation is the process of determining the structure and terms of exchange relationships to enable a multilevel, multiagent production activity. We present a simple model of supply chains, highlighting two characteristic features: hierarchical subtask decomposition, and resource contention. To decentralize the formation process, we introduce a market price system over the resources produced along the chain. In a competitive equilibrium for this system, agents choose locally optimal allocations with respect to prices, and outcomes are optimal overall. To determine prices, we define a market protocol based on distributed, progressive auctions, and myopic, non-strategic agent bidding policies. In the presence of resource contention, this protocol produces better solutions than the greedy protocols common in the artificial intelligence and multiagent systems literature. The protocol often converges to high-value supply chains, and when competitive equilibria exist, typically to approximate competitive equilibria. However, complementarities in agent production technologies can cause the protocol to wastefully allocate inputs to agents that do not produce their outputs. A subsequent decommitment phase recovers a significant fraction of the lost surplus. ","Decentralized Supply Chain Formation: A Market Protocol and Competitive Equilibrium Analysis" " Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way. In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set of outcomes according to the preference relation, and constructing the best outcome subject to available evidence. ","CP-nets: A Tool for Representing and Reasoning with Conditional Ceteris Paribus Preference Statements" " MAP is the problem of finding a most probable instantiation of a set of variables given evidence. MAP has always been perceived to be significantly harder than the related problems of computing the probability of a variable instantiation Pr, or the problem of computing the most probable explanation (MPE). This paper investigates the complexity of MAP in Bayesian networks. Specifically, we show that MAP is complete for NP^PP and provide further negative complexity results for algorithms based on variable elimination. We also show that MAP remains hard even when MPE and Pr become easy. For example, we show that MAP is NP-complete when the networks are restricted to polytrees, and even then cannot be effectively approximated. 
Given the difficulty of computing MAP exactly, and the difficulty of approximating MAP while providing useful guarantees on the resulting approximation, we investigate best effort approximations. We introduce a generic MAP approximation framework. We provide two instantiations of the framework; one for networks which are amenable to exact inference Pr, and one for networks for which even exact inference is too hard. This allows MAP approximation on networks that are too complex to even exactly solve the easier problems, Pr and MPE. Experimental results indicate that using these approximation algorithms provides much better solutions than standard techniques, and provide accurate MAP estimates in many cases. ",Complexity Results and Approximation Strategies for MAP Explanations " The Model Checking Integrated Planning System (MIPS) is a temporal least commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant of the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear time algorithm to compute the parallel plan bypasses known NP hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase that grounds and simplifies parameterized predicates, functions and operators, that infers knowledge to minimize the state description length, and that detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating point arithmetic, weighted heuristic search exploration according to an inadmissible estimate and parameterized optimization. ","Taming Numbers and Durations in the Model Checking Integrated Planning System" " We propose a formalism for representation of finite languages, referred to as the class of IDL-expressions, which combines concepts that were only considered in isolation in existing formalisms. 
The suggested applications are in natural language processing, more specifically in surface natural language generation and in machine translation, where a sentence is obtained by first generating a large set of candidate sentences, represented in a compact way, and then by filtering such a set through a parser. We study several formal properties of IDL-expressions and compare this new formalism with more standard ones. We also present a novel parsing algorithm for IDL-expressions and prove a non-trivial upper bound on its time complexity. ","IDL-Expressions: A Formalism for Representing and Parsing Finite Languages in Natural Language Processing" " Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are latent. There are no theoretically well justified model selection criteria for HLC models in particular and Bayesian networks with latent nodes in general. Nonetheless, empirical studies suggest that the BIC score is a reasonable criterion to use in practice for learning HLC models. Empirical studies also suggest that sometimes model selection can be improved if standard model dimension is replaced with effective model dimension in the penalty term of the BIC score. Effective dimensions are difficult to compute. In this paper, we prove a theorem that relates the effective dimension of an HLC model to the effective dimensions of a number of latent class models. The theorem makes it computationally feasible to compute the effective dimensions of large HLC models. The theorem can also be used to compute the effective dimensions of general tree models. ",Effective Dimensions of Hierarchical Latent Class Models " We introduce an abductive method for a coherent integration of independent data-sources. The idea is to compute a list of data-facts that should be inserted to the amalgamated database or retracted from it in order to restore its consistency. This method is implemented by an abductive solver, called Asystem, that applies SLDNFA-resolution on a meta-theory that relates different, possibly contradicting, input databases. We also give a pure model-theoretic analysis of the possible ways to `recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. This allows us to characterize the `recovered databases' in terms of the `preferred' (i.e., most consistent) models of the theory. The outcome is an abductive-based application that is sound and complete with respect to a corresponding model-based, preferential semantics, and -- to the best of our knowledge -- is more expressive (thus more general) than any other implementation of coherent integration of databases. ",Coherent Integration of Databases by Abductive Logic Programming " We present a visually-grounded language understanding model based on a study of how people verbally describe objects in scenes. The emphasis of the model is on the combination of individual word meanings to produce meanings for complex referring expressions. The model has been implemented, and it is able to understand a broad range of spatial referring expressions. We describe our implementation of word level visually-grounded semantics and their embedding in a compositional parsing framework. The implemented system selects the correct referents in response to natural language expressions for a large percentage of test cases. 
In an analysis of the system's successes and failures we reveal how visual context influences the semantics of utterances and propose future extensions to the model that take such context into account. ",Grounded Semantic Composition for Visual Scenes " The 2002 Trading Agent Competition (TAC) presented a challenging market game in the domain of travel shopping. One of the pivotal issues in this domain is uncertainty about hotel prices, which have a significant influence on the relative cost of alternative trip schedules. Thus, virtually all participants employ some method for predicting hotel prices. We survey approaches employed in the tournament, finding that agents apply an interesting diversity of techniques, taking into account differing sources of evidence bearing on prices. Based on data provided by entrants on their agents' actual predictions in the TAC-02 finals and semifinals, we analyze the relative efficacy of these approaches. The results show that taking into account game-specific information about flight prices is a major distinguishing factor. Machine learning methods effectively induce the relationship between flight and hotel prices from game data, and a purely analytical approach based on competitive equilibrium analysis achieves equal accuracy with no historical data. Employing a new measure of prediction quality, we relate absolute accuracy to bottom-line performance in the game. ",Price Prediction in a Trading Agent Competition " The predominant knowledge-based approach to automated model construction, compositional modelling, employs a set of models of particular functional components. Its inference mechanism takes a scenario describing the constituent interacting components of a system and translates it into a useful mathematical model. This paper presents a novel compositional modelling approach aimed at building model repositories. It furthers the field in two respects. Firstly, it expands the application domain of compositional modelling to systems that can not be easily described in terms of interacting functional components, such as ecological systems. Secondly, it enables the incorporation of user preferences into the model selection process. These features are achieved by casting the compositional modelling problem as an activity-based dynamic preference constraint satisfaction problem, where the dynamic constraints describe the restrictions imposed over the composition of partial models and the preferences correspond to those of the user of the automated modeller. In addition, the preference levels are represented through the use of symbolic values that differ in orders of magnitude. ","Compositional Model Repositories via Dynamic Constraint Satisfaction with Order-of-Magnitude Preferences" " Two major goals in machine learning are the discovery and improvement of solutions to complex problems. In this paper, we argue that complexification, i.e. the incremental elaboration of solutions through adding new structure, achieves both these goals. We demonstrate the power of complexification through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures. NEAT is applied to an open-ended coevolutionary robot duel domain where robot controllers compete head to head. Because the robot duel domain supports a wide range of strategies, and because coevolution benefits from an escalating arms race, it serves as a suitable testbed for studying complexification. 
When compared to the evolution of networks with fixed structure, complexifying evolution discovers significantly more sophisticated strategies. The results suggest that in order to discover and improve complex solutions, evolution, and search in general, should be allowed to complexify as well as optimize. ",Competitive Coevolution through Evolutionary Complexification " When writing a constraint program, we have to choose which variables should be the decision variables, and how to represent the constraints on these variables. In many cases, there is considerable choice for the decision variables. Consider, for example, permutation problems in which we have as many values as variables, and each variable takes a unique value. In such problems, we can choose between a primal and a dual viewpoint. In the dual viewpoint, each dual variable represents one of the primal values, whilst each dual value represents one of the primal variables. Alternatively, by means of channelling constraints to link the primal and dual variables, we can have a combined model with both sets of variables. In this paper, we perform an extensive theoretical and empirical study of such primal, dual and combined models for two classes of problems: permutation problems and injection problems. Our results show that it can often be advantageous to use multiple viewpoints, and to have constraints which channel between them to maintain consistency. They also illustrate a general methodology for comparing different constraint models. ",Dual Modelling of Permutation and Injection Problems " This is the first of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper is a survey of the work underlying ZAP, and discusses previous attempts to improve the performance of the Davis-Putnam-Logemann-Loveland algorithm by exploiting the structure of the problem being solved. We examine existing ideas including extensions of the Boolean language to allow cardinality constraints, pseudo-Boolean representations, symmetry, and a limited form of quantification. While this paper is intended as a survey, our research results are contained in the two subsequent articles, with the theoretical structure of ZAP described in the second paper in this series, and ZAP's implementation described in the third. ","Generalizing Boolean Satisfiability I: Background and Survey of Existing Work" " We address the problem of finding the shortest path between two points in an unknown real physical environment, where a traveling agent must move around in the environment to explore unknown territory. We introduce the Physical-A* algorithm (PHA*) for solving this problem. PHA* expands all the mandatory nodes that A* would expand and returns the shortest path between the two points. However, due to the physical nature of the problem, the complexity of the algorithm is measured by the traveling effort of the moving agent and not by the number of generated nodes, as in standard A*.
PHA* is presented as a two-level algorithm, such that its high level, A*, chooses the next node to be expanded and its low level directs the agent to that node in order to explore it. We present a number of variations for both the high-level and low-level procedures and evaluate their performance theoretically and experimentally. We show that the travel cost of our best variation is fairly close to the optimal travel cost, assuming that the mandatory nodes of A* are known in advance. We then generalize our algorithm to the multi-agent case, where a number of cooperative agents are designed to solve the problem. Specifically, we provide an experimental implementation for such a system. It should be noted that the problem addressed here is not a navigation problem, but rather a problem of finding the shortest path between two points for future usage. ","PHA*: Finding the Shortest Path with A* in An Unknown Physical Environment" " Value iteration is a popular algorithm for finding near optimal policies for POMDPs. It is inefficient due to the need to account for the entire belief space, which necessitates the solution of large numbers of linear programs. In this paper, we study value iteration restricted to belief subsets. We show that, together with properly chosen belief subsets, restricted value iteration yields near-optimal policies and we give a condition for determining whether a given belief subset would bring about savings in space and time. We also apply restricted value iteration to two interesting classes of POMDPs, namely informative POMDPs and near-discernible POMDPs. ",Restricted Value Iteration: Theory and Algorithms " Many researchers in artificial intelligence are beginning to explore the use of soft constraints to express a set of (possibly conflicting) problem requirements. A soft constraint is a function defined on a collection of variables which associates some measure of desirability with each possible combination of values for those variables. However, the crucial question of the computational complexity of finding the optimal solution to a collection of soft constraints has so far received very little attention. In this paper we identify a class of soft binary constraints for which the problem of finding the optimal solution is tractable. In other words, we show that for any given set of such constraints, there exists a polynomial time algorithm to determine the assignment having the best overall combined measure of desirability. This tractable class includes many commonly-occurring soft constraints, such as 'as near as possible' or 'as soon as possible after', as well as crisp constraints such as 'greater than'. Finally, we show that this tractable class is maximal, in the sense that adding any other form of soft binary constraint which is not in the class gives rise to a class of problems which is NP-hard. ",A Maximal Tractable Class of Soft Constraints " Efficient implementations of DPLL with the addition of clause learning are the fastest complete Boolean satisfiability solvers and can handle many significant real-world problems, such as verification, planning and design. Despite its importance, little is known of the ultimate strengths and limitations of the technique. This paper presents the first precise characterization of clause learning as a proof system (CL), and begins the task of understanding its power by relating it to the well-studied resolution proof system. 
In particular, we show that with a new learning scheme, CL can provide exponentially shorter proofs than many proper refinements of general resolution (RES) satisfying a natural property. These include regular and Davis-Putnam resolution, which are already known to be much stronger than ordinary DPLL. We also show that a slight variant of CL with unlimited restarts is as powerful as RES itself. Translating these analytical results to practice, however, presents a challenge because of the nondeterministic nature of clause learning algorithms. We propose a novel way of exploiting the underlying problem structure, in the form of a high level problem description such as a graph or PDDL specification, to guide clause learning algorithms toward faster solutions. We show that this leads to exponential speed-ups on grid and randomized pebbling problems, as well as substantial improvements on certain ordering formulas. ",Towards Understanding and Harnessing the Potential of Clause Learning " Argumentation is based on the exchange and valuation of interacting arguments, followed by the selection of the most acceptable of them (for example, in order to take a decision, to make a choice). Starting from the framework proposed by Dung in 1995, our purpose is to introduce 'graduality' in the selection of the best arguments, i.e., to be able to partition the set of the arguments in more than the two usual subsets of 'selected' and 'non-selected' arguments in order to represent different levels of selection. Our basic idea is that an argument is all the more acceptable if it can be preferred to its attackers. First, we discuss general principles underlying a 'gradual' valuation of arguments based on their interactions. Following these principles, we define several valuation models for an abstract argumentation system. Then, we introduce 'graduality' in the concept of acceptability of arguments. We propose new acceptability classes and a refinement of existing classes taking advantage of an available 'gradual' valuation. ",Graduality in Argumentation " Inductive learning is based on inferring a general rule from a finite data set and using it to label new data. In transduction one attempts to solve the problem of using a labeled training set to label a set of unlabeled points, which are given to the learner prior to learning. Although transduction seems at the outset to be an easier task than induction, there have not been many provably useful algorithms for transduction. Moreover, the precise relation between induction and transduction has not yet been determined. The main theoretical developments related to transduction were presented by Vapnik more than twenty years ago. One of Vapnik's basic results is a rather tight error bound for transductive classification based on an exact computation of the hypergeometric tail. While tight, this bound is given implicitly via a computational routine. Our first contribution is a somewhat looser but explicit characterization of a slightly extended PAC-Bayesian version of Vapnik's transductive bound. This characterization is obtained using concentration inequalities for the tail of sums of random variables obtained by sampling without replacement. We then derive error bounds for compression schemes such as (transductive) support vector machines and for transduction algorithms based on clustering. 
The main observation used for deriving these new error bounds and algorithms is that the unlabeled test points, which in the transductive setting are known in advance, can be used in order to construct useful data dependent prior distributions over the hypothesis space. ","Explicit Learning Curves for Transduction and Application to Clustering and Compression Algorithms" " Decentralized control of cooperative systems captures the operation of a group of decision makers that share a single global objective. The difficulty in solving optimally such problems arises when the agents lack full observability of the global state of the system when they operate. The general problem has been shown to be NEXP-complete. In this paper, we identify classes of decentralized control problems whose complexity ranges between NEXP and P. In particular, we study problems characterized by independent transitions, independent observations, and goal-oriented objective functions. Two algorithms are shown to solve optimally useful classes of goal-oriented decentralized processes in polynomial time. This paper also studies information sharing among the decision-makers, which can improve their performance. We distinguish between three ways in which agents can exchange information: indirect communication, direct communication and sharing state features that are not controlled by the agents. Our analysis shows that for every class of problems we consider, introducing direct or indirect communication does not change the worst-case complexity. The results provide a better understanding of the complexity of decentralized control problems that arise in practice and facilitate the development of planning algorithms for these problems. ","Decentralized Control of Cooperative Systems: Categorization and Complexity Analysis" " In this paper, we confront the problem of applying reinforcement learning to agents that perceive the environment through many sensors and that can perform parallel actions using many actuators as is the case in complex autonomous robots. We argue that reinforcement learning can only be successfully applied to this case if strong assumptions are made on the characteristics of the environment in which the learning is performed, so that the relevant sensor readings and motor commands can be readily identified. The introduction of such assumptions leads to strongly-biased learning systems that can eventually lose the generality of traditional reinforcement-learning algorithms. In this line, we observe that, in realistic situations, the reward received by the robot depends only on a reduced subset of all the executed actions and that only a reduced subset of the sensor inputs (possibly different in each situation and for each action) are relevant to predict the reward. We formalize this property in the so called 'categorizability assumption' and we present an algorithm that takes advantage of the categorizability of the environment, allowing a decrease in the learning time with respect to existing reinforcement-learning algorithms. Results of the application of the algorithm to a couple of simulated realistic-robotic problems (landmark-based navigation and the six-legged robot gait generation) are reported to validate our approach and to compare it to existing flat and generalization-based reinforcement-learning approaches. 
","Reinforcement Learning for Agents with Many Sensors and Actuators Acting in Categorizable Environments" " We explore a method for computing admissible heuristic evaluation functions for search problems. It utilizes pattern databases, which are precomputed tables of the exact cost of solving various subproblems of an existing problem. Unlike standard pattern database heuristics, however, we partition our problems into disjoint subproblems, so that the costs of solving the different subproblems can be added together without overestimating the cost of solving the original problem. Previously, we showed how to statically partition the sliding-tile puzzles into disjoint groups of tiles to compute an admissible heuristic, using the same partition for each state and problem instance. Here we extend the method and show that it applies to other domains as well. We also present another method for additive heuristics which we call dynamically partitioned pattern databases. Here we partition the problem into disjoint subproblems for each state of the search dynamically. We discuss the pros and cons of each of these methods and apply both methods to three different problem domains: the sliding-tile puzzles, the 4-peg Towers of Hanoi problem, and finding an optimal vertex cover of a graph. We find that in some problem domains, static partitioning is most effective, while in others dynamic partitioning is a better choice. In each of these problem domains, either statically partitioned or dynamically partitioned pattern database heuristics are the best known heuristics for the problem. ",Additive Pattern Database Heuristics " This paper is concerned with algorithms for prediction of discrete sequences over a finite alphabet, using variable order Markov models. The class of such algorithms is large and in principle includes any lossless compression algorithm. We focus on six prominent prediction algorithms, including Context Tree Weighting (CTW), Prediction by Partial Match (PPM) and Probabilistic Suffix Trees (PSTs). We discuss the properties of these algorithms and compare their performance using real life sequences from three domains: proteins, English text and music pieces. The comparison is made with respect to prediction quality as measured by the average log-loss. We also compare classification algorithms based on these predictors with respect to a number of large protein classification tasks. Our results indicate that a ""decomposed"" CTW (a variant of the CTW algorithm) and PPM outperform all other algorithms in sequence prediction tasks. Somewhat surprisingly, a different algorithm, which is a modification of the Lempel-Ziv compression algorithm, significantly outperforms all algorithms on the protein classification problems. ",On Prediction Using Variable Order Markov Models " Many known planning tasks have inherent constraints concerning the best order in which to achieve the goals. A number of research efforts have been made to detect such constraints and to use them for guiding search, in the hope of speeding up the planning process. We go beyond the previous approaches by considering ordering constraints not only over the (top-level) goals, but also over the sub-goals that will necessarily arise during planning. Landmarks are facts that must be true at some point in every valid solution plan. We extend Koehler and Hoffmann's definition of reasonable orders between top level goals to the more general case of landmarks. 
We show how landmarks can be found, how their reasonable orders can be approximated, and how this information can be used to decompose a given planning task into several smaller sub-tasks. Our methodology is completely domain- and planner-independent. The implementation demonstrates that the approach can yield significant runtime performance improvements when used as a control loop around state-of-the-art sub-optimal planning systems, as exemplified by FF and LPG. ",Ordered Landmarks in Planning " Standard value function approaches to finding policies for Partially Observable Markov Decision Processes (POMDPs) are generally considered to be intractable for large models. The intractability of these algorithms is to a large extent a consequence of computing an exact, optimal policy over the entire belief space. However, in real-world POMDP problems, computing the optimal policy for the full belief space is often unnecessary for good control even for problems with complicated policy classes. The beliefs experienced by the controller often lie near a structured, low-dimensional subspace embedded in the high-dimensional belief space. Finding a good approximation to the optimal value function for only this subspace can be much easier than computing the full value function. We introduce a new method for solving large-scale POMDPs by reducing the dimensionality of the belief space. We use Exponential family Principal Components Analysis (Collins, Dasgupta and Schapire, 2002) to represent sparse, high-dimensional belief spaces using small sets of learned features of the belief state. We then plan only in terms of the low-dimensional belief features. By planning in this low-dimensional space, we can find policies for POMDP models that are orders of magnitude larger than models that can be handled by conventional techniques. We demonstrate the use of this algorithm on a synthetic problem and on mobile robot navigation tasks. ",Finding Approximate POMDP solutions Through Belief Compression " We propose a model for errors in sung queries, a variant of the hidden Markov model (HMM). This is a solution to the problem of identifying the degree of similarity between a (typically error-laden) sung query and a potential target in a database of musical works, an important problem in the field of music information retrieval. Similarity metrics are a critical component of query-by-humming (QBH) applications which search audio and multimedia databases for strong matches to oral queries. Our model comprehensively expresses the types of error or variation between target and query: cumulative and non-cumulative local errors, transposition, tempo and tempo changes, insertions, deletions and modulation. The model is not only expressive, but automatically trainable, or able to learn and generalize from query examples. We present results of simulations, designed to assess the discriminatory potential of the model, and tests with real sung queries, to demonstrate relevance to real-world applications. ",A Comprehensive Trainable Error Model for Sung Music Queries " In recent years, there has been much interest in phase transitions of combinatorial problems. Phase transitions have been successfully used to analyze combinatorial optimization problems, characterize their typical-case features and locate the hardest problem instances. In this paper, we study phase transitions of the asymmetric Traveling Salesman Problem (ATSP), an NP-hard combinatorial optimization problem that has many real-world applications. 
Using random instances of up to 1,500 cities in which intercity distances are uniformly distributed, we empirically show that many properties of the problem, including the optimal tour cost and backbone size, experience sharp transitions as the precision of intercity distances increases across a critical value. Our experimental results on the costs of the ATSP tours and assignment problem agree with the theoretical result that the asymptotic cost of the assignment problem is pi^2/6 as the number of cities goes to infinity. In addition, we show that the average computational cost of the well-known branch-and-bound subtour elimination algorithm for the problem also exhibits a thrashing behavior, transitioning from easy to difficult as the distance precision increases. These results answer positively an open question regarding the existence of phase transitions in the ATSP, and provide guidance on how difficult ATSP problem instances should be generated. ","Phase Transitions and Backbones of the Asymmetric Traveling Salesman Problem" " A time series consists of a series of values or events obtained over repeated measurements in time. Analysis of time series represents an important tool in many application areas, such as stock market analysis, process and quality control, observation of natural phenomena, medical treatments, etc. A vital component in many types of time-series analysis is the choice of an appropriate distance/similarity measure. Numerous measures have been proposed to date, with the most successful ones based on dynamic programming. Since these measures are of quadratic time complexity, however, global constraints are often employed to limit the search space in the matrix during the dynamic programming procedure, in order to speed up computation. Furthermore, it has been reported that such constrained measures can also achieve better accuracy. In this paper, we investigate two representative time-series distance/similarity measures based on dynamic programming, Dynamic Time Warping (DTW) and Longest Common Subsequence (LCS), and the effects of global constraints on them. Through extensive experiments on a large number of time-series data sets, we demonstrate how global constraints can significantly reduce the computation time of DTW and LCS. We also show that, if the constraint parameter is tight enough (less than 10-15% of time-series length), the constrained measure becomes significantly different from its unconstrained counterpart, in the sense of producing qualitatively different 1-nearest neighbor graphs. This observation explains the potential for accuracy gains when using constrained measures, highlighting the need for careful tuning of constraint parameters in order to achieve a good trade-off between speed and accuracy. ","The Influence of Global Constraints on Similarity Measures for Time-Series Databases" " We present in this paper our law that there is always a connection present between two entities, with at least a self-connection being present in each node. An entity is an object, physical or imaginary, that is connected by a path (or connection) and which is important for achieving the desired result of the scenario.
In machine learning, we state that for any scenario, a subject entity is always, directly or indirectly, connected and affected by single or multiple independent / dependent entities, and their impact on the subject entity is dependent on various factors falling into categories such as the existence ",Law of Connectivity in Machine Learning " The importance of algorithm portfolio techniques for SAT has long been noted, and a number of very successful systems have been devised, including the most successful one --- SATzilla. However, all these systems are quite complex (to understand, reimplement, or modify). In this paper we propose a new algorithm portfolio for SAT that is extremely simple, but at the same time so efficient that it outperforms SATzilla. For a new SAT instance to be solved, our portfolio finds its k-nearest neighbors from the training set and invokes a solver that performs the best at those instances. The main distinguishing feature of our algorithm portfolio is the locality of the selection procedure --- the selection of a SAT solver is based only on few instances similar to the input one. ",Simple Algorithm Portfolio for SAT " The superiority and inferiority ranking (SIR) method is a generalization of the well-known PROMETHEE method, which can deal more efficiently with multi-criterion decision making (MCDM) problems. Intuitionistic fuzzy sets (IFSs), as an important extension of fuzzy sets, include both membership functions and non-membership functions and can be used to more precisely describe uncertain information. In the real world, decision situations usually arise under uncertain environments and involve multiple individuals who have their own points of view on handling decision problems. In order to solve group MCDM problems under uncertainty, we propose a novel intuitionistic fuzzy SIR method in this paper. This approach uses intuitionistic fuzzy aggregation operators and SIR ranking methods to handle uncertain information; integrate individual opinions into group opinions; make decisions on multiple criteria; and finally structure a specific decision map. The proposed approach is illustrated in a simulation of a group decision making problem related to supply chain management. ","A Novel Multicriteria Group Decision Making Approach With Intuitionistic Fuzzy SIR Method" " This volume contains the papers presented at the first edition of the Doctoral Consortium of the 5th International Symposium on Rules (RuleML 2011@IJCAI) held on July 19th, 2011 in Barcelona, as well as the poster session papers of the RuleML 2011@IJCAI main conference. ","Proceedings of the Doctoral Consortium and Poster Session of the 5th International Symposium on Rules (RuleML 2011@IJCAI)" " Informledge System (ILS) is a knowledge network with autonomous nodes and intelligent links that integrate and structure the pieces of knowledge. In this paper, we put forward the strategies for knowledge embedding and retrieval in an ILS. ILS is a powerful knowledge network system dealing with logical storage and connectivity of information units to form knowledge using autonomous nodes and multi-lateral links. In ILS, the autonomous nodes known as Knowledge Network Nodes (KNNs) play vital roles, not only in storage, parsing and forming the multi-lateral linkages between knowledge points, but also in helping to realize intelligent retrieval of linked information units in the form of knowledge. Knowledge built into the ILS forms the shape of a sphere.
The intelligence incorporated into the links of a KNN helps in retrieving various knowledge threads from a specific set of KNNs. A developed entity of information realized through a KNN forms the shape of a knowledge cone. ",Knowledge Embedding and Retrieval Strategies in an Informledge System " The proposal of Elisa Marengo's thesis is to extend commitment protocols to explicitly account for temporal regulations. This extension will satisfy two needs: (1) it will allow representing, in a flexible and modular way, temporal regulations with a normative force, posed on the interaction, so as to represent conventions, laws and suchlike; (2) it will allow committing to complex conditions, which describe not only what will be achieved but to some extent also how. These two aspects will be deeply investigated in the proposal of a unified framework, which is part of the ongoing work and will be included in the thesis. ",Extend Commitment Protocols with Temporal Regulations: Why and How " Rule-Based Systems have been in use for decades to solve a variety of problems but not in the sensor informatics domain. Rules aid the aggregation of low-level sensor readings to form a more complete picture of the real world and help to address 10 identified challenges for sensor network middleware. This paper presents the reader with an overview of a system architecture and a pilot application to demonstrate the usefulness of a system integrating rules with sensor middleware. ",Rule-Based Semantic Sensing " Multi-Context Systems are an expressive formalism to model (possibly) non-monotonic information exchange between heterogeneous knowledge bases. Such information exchange, however, often comes with unforeseen side-effects leading to violation of constraints, making the system inconsistent, and thus unusable. Although there are many approaches to assess and repair a single inconsistent knowledge base, the heterogeneous nature of Multi-Context Systems poses problems which have not yet been addressed in a satisfying way: How to identify and explain an inconsistency that spreads over multiple knowledge bases with different logical formalisms (e.g., logic programs and ontologies)? What are the causes of inconsistency if inference/information exchange is non-monotonic (e.g., absent information as cause)? How to deal with inconsistency if access to knowledge bases is restricted (e.g., companies exchange information, but do not allow arbitrary modifications to their knowledge bases)? Many traditional approaches solely aim for a consistent system, but automatic removal of inconsistency is not always desirable. Therefore a human operator has to be supported in finding the erroneous parts contributing to the inconsistency. In my thesis those issues will be addressed mainly from a foundational perspective, while our research project also provides algorithms and prototype implementations. ",Advancing Multi-Context Systems by Inconsistency Management " We present a description of the PhD thesis which aims to propose a rule-based query answering method for relational data. In this approach we use additional knowledge, which is represented as a set of rules and describes the source data at concept (ontological) level. Queries are posed in terms of the abstract level. We present two methods. The first one uses hybrid reasoning and the second one exploits only forward chaining. These two methods are demonstrated by the prototypical implementation of the system coupled with the Jess engine.
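As a hedged illustration of the forward-chaining variant described in the query answering abstract above, the sketch below runs naive forward chaining over propositional Horn-style rules until a fixpoint is reached. It is not the thesis's Jess-based system, and the rule and fact names about economic crimes are invented for the example.

```python
# Naive forward chaining over propositional Horn-style rules (illustrative sketch only).
# The actual system described above works on relational data via the Jess engine;
# the rule/fact names below are hypothetical.

def forward_chain(facts, rules):
    """Repeatedly fire rules (body -> head) until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

# Hypothetical concept-level rules about an economic-crime knowledge base.
rules = [
    ({"transfer_to_shell_company", "no_invoice"}, "suspicious_transaction"),
    ({"suspicious_transaction", "repeated_pattern"}, "money_laundering_indicator"),
]
facts = {"transfer_to_shell_company", "no_invoice", "repeated_pattern"}

print(forward_chain(facts, rules))
# -> also contains 'suspicious_transaction' and 'money_laundering_indicator'
```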
Tests are performed on the knowledge base of the selected economic crimes: fraudulent disbursement and money laundering. ","Rule-based query answering method for a knowledge base of economic crimes" " IT Service Management deals with managing a broad range of items related to complex system environments. As there is a close connection to both business interests and IT infrastructure, the application of semantic expressions that are seamlessly integrated within applications for managing ITSM environments can help to improve transparency and profitability. This paper focuses on the challenges regarding the integration of semantics and ontologies within ITSM environments. It will describe the paradigm of relationships and inheritance within complex service trees and will present an approach of ontologically expressing them. Furthermore, the application of SBVR-based rules as executable SQL triggers will be discussed. Finally, the broad range of topics for further research, derived from the findings, will be presented. ","Semantic-ontological combination of Business Rules and Business Processes in IT Service Management" " The study proposes a framework of ONTOlogy-based Group Decision Support System (ONTOGDSS) for decision processes which exhibit the complex structure of decision-problem and decision-group. It is capable of reducing the complexity of problem structure and group relations. The system allows decision makers to participate in group decision-making through the web environment, via the ontology relation. It facilitates the management of the decision process as a whole, from criteria generation, alternative evaluation, and opinion interaction to decision aggregation. The embedded ontology structure in ONTOGDSS provides the important formal description features to facilitate decision analysis and verification. It examines the software architecture, the selection methods, the decision path, etc. Finally, the ontology application of this system is illustrated with a specific real case to demonstrate its potential towards decision-making development. ",An Ontology-driven Framework for Supporting Complex Decision Process " Fault diagnosis and failure prognosis are essential techniques in improving the safety of many manufacturing systems. Therefore, on-line fault detection and isolation is one of the most important tasks in safety-critical and intelligent control systems. Computational intelligence techniques are being investigated as an extension of the traditional fault diagnosis methods. This paper discusses the Temporal Neuro-Fuzzy Systems (TNFS) fault diagnosis within an application study of a manufacturing system. The key issues of finding a suitable structure for detecting and isolating ten realistic actuator faults are described. Within this framework, an interactive data-processing and simulation software package named NEFDIAG (NEuro Fuzzy DIAGnosis) version 1.0 has been developed. This software is devoted primarily to the creation, training and testing of a Neuro-Fuzzy system for classifying industrial process failures. NEFDIAG can be represented as a special type of fuzzy perceptron, with three layers used to classify patterns and failures. The selected system is the clinker workshop of the SCIMAT cement factory in Algeria. ",A Temporal Neuro-Fuzzy Monitoring System to Manufacturing Systems " Open-text (or open-domain) semantic parsers are designed to interpret any statement in natural language by inferring a corresponding meaning representation (MR).
Unfortunately, large scale systems cannot be easily machine-learned due to lack of directly supervised data. We propose here a method that learns to assign MRs to a wide range of text (using a dictionary of more than 70,000 words, which are mapped to more than 40,000 entities) thanks to a training scheme that combines learning from WordNet and ConceptNet with learning from raw text. The model learns structured embeddings of words, entities and MRs via a multi-task training process operating on these diverse sources of data that integrates all the learnt knowledge into a single system. This work ends up combining methods for knowledge acquisition, semantic parsing, and word-sense disambiguation. Experiments on various tasks indicate that our approach is indeed successful and can form a basis for future more sophisticated systems. ","Towards Open-Text Semantic Parsing via Multi-Task Learning of Structured Embeddings" " Commute Time Distance (CTD) is a random walk based metric on graphs. CTD has found widespread applications in many domains including personalized search, collaborative filtering and making search engines robust against manipulation. Our interest is inspired by the use of CTD as a metric for anomaly detection. It has been shown that CTD can be used to simultaneously identify both global and local anomalies. Here we propose an accurate and efficient approximation for computing the CTD in an incremental fashion in order to facilitate real-time applications. An online anomaly detection algorithm is designed where the CTD of each new arriving data point to any point in the current graph can be estimated in constant time ensuring a real-time response. Moreover, the proposed approach can also be applied in many other applications that utilize commute time distance. ",Online Anomaly Detection Systems Using Incremental Commute Time " The promise of lifted probabilistic inference is to carry out probabilistic inference in a relational probabilistic model without needing to reason about each individual separately (grounding out the representation) by treating the undistinguished individuals as a block. Current exact methods still need to ground out in some cases, typically because the representation of the intermediate results is not closed under the lifted operations. We set out to answer the question as to whether there is some fundamental reason why lifted algorithms would need to ground out undifferentiated individuals. We have two main results: (1) We completely characterize the cases where grounding is polynomial in a population size, and show how we can do lifted inference in time polynomial in the logarithm of the population size for these cases. (2) For the case of no-argument and single-argument parametrized random variables where the grounding is not polynomial in a population size, we present lifted inference which is polynomial in the population size whereas grounding is exponential. Neither of these cases requires reasoning separately about the individuals that are not explicitly mentioned. ",Towards Completely Lifted Search-based Probabilistic Inference " Using a recently proposed model for combinatorial landscapes, Local Optima Networks (LON), we conduct a thorough analysis of two types of instances of the Quadratic Assignment Problem (QAP). This network model is a reduction of the landscape in which the nodes correspond to the local optima, and the edges account for the notion of adjacency between their basins of attraction. 
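The local optima network abstract above defines a graph whose nodes are the local optima of a fitness landscape and whose edges capture adjacency between their basins of attraction. The sketch below is only an illustration of that definition on a tiny, fully enumerable random binary landscape (an assumption; the paper studies QAP instances and uses its own extraction procedure), built by hill-climbing from every configuration to identify its basin.

```python
# Build a toy local optima network for a small binary landscape.
# Nodes: local optima w.r.t. the one-bit-flip neighbourhood (maximisation).
# Edges: pairs of optima whose basins contain neighbouring configurations.
# Illustrative sketch of the definition only; not the authors' methodology.
import itertools, random

N = 10
random.seed(0)
fitness = {bits: random.random() for bits in itertools.product((0, 1), repeat=N)}

def neighbours(bits):
    for i in range(N):
        yield bits[:i] + (1 - bits[i],) + bits[i + 1:]

def hill_climb(bits):
    """Best-improvement hill climbing; returns the local optimum reached."""
    while True:
        best = max(neighbours(bits), key=fitness.get)
        if fitness[best] <= fitness[bits]:
            return bits
        bits = best

basin = {bits: hill_climb(bits) for bits in fitness}   # configuration -> its optimum
optima = set(basin.values())                           # nodes of the network
edges = set()
for bits, opt in basin.items():
    for nb in neighbours(bits):
        if basin[nb] != opt:
            edges.add(frozenset((opt, basin[nb])))     # touching basins -> edge

print(len(optima), "local optima,", len(edges), "edges")
```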
The model was inspired by the notion of 'inherent network' of potential energy surfaces proposed in physical-chemistry. The local optima networks extracted from the so called uniform and real-like QAP instances, show features clearly distinguishing these two types of instances. Apart from a clear confirmation that the search difficulty increases with the problem dimension, the analysis provides new confirming evidence explaining why the real-like instances are easier to solve exactly using heuristic search, while the uniform instances are easier to solve approximately. Although the local optima network model is still under development, we argue that it provides a novel view of combinatorial landscapes, opening up the possibilities for new analytical tools and understanding of problem difficulty in combinatorial optimization. ",Local Optima Networks of the Quadratic Assignment Problem " In previous work we have introduced a network-based model that abstracts many details of the underlying landscape and compresses the landscape information into a weighted, oriented graph which we call the local optima network. The vertices of this graph are the local optima of the given fitness landscape, while the arcs are transition probabilities between local optima basins. Here we extend this formalism to neutral fitness landscapes, which are common in difficult combinatorial search spaces. By using two known neutral variants of the NK family (i.e. NKp and NKq) in which the amount of neutrality can be tuned by a parameter, we show that our new definitions of the optima networks and the associated basins are consistent with the previous definitions for the non-neutral case. Moreover, our empirical study and statistical analysis show that the features of neutral landscapes interpolate smoothly between landscapes with maximum neutrality and non-neutral ones. We found some unknown structural differences between the two studied families of neutral landscapes. But overall, the network features studied confirmed that neutrality, in landscapes with percolating neutral networks, may enhance heuristic search. Our current methodology requires the exhaustive enumeration of the underlying search space. Therefore, sampling techniques should be developed before this analysis can have practical implications. We argue, however, that the proposed model offers a new perspective into the problem difficulty of combinatorial optimization problems and may inspire the design of more effective search heuristics. ",Local Optima Networks of NK Landscapes with Neutrality " In this paper, we study the exploration / exploitation trade-off in cellular genetic algorithms. We define a new selection scheme, the centric selection, which is tunable and allows controlling the selective pressure with a single parameter. The equilibrium model is used to study the influence of the centric selection on the selective pressure and a new model which takes into account problem dependent statistics and selective pressure in order to deal with the exploration / exploitation trade-off is proposed: the punctuated equilibria model. Performances on the quadratic assignment problem and NK-Landscapes put in evidence an optimal exploration / exploitation trade-off on both of the classes of problems. The punctuated equilibria model is used to explain these results. 
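The cellular genetic algorithm abstract above does not spell out the mechanics of centric selection, so the sketch below is explicitly not that scheme; it is a standard probabilistic binary tournament, shown only to make concrete the idea of controlling selective pressure with a single parameter.

```python
# Illustration of tuning selective pressure with one parameter.
# NOTE: this is NOT the centric selection scheme of the abstract above (its
# mechanism is not specified there); it is an ordinary probabilistic binary
# tournament used purely as an illustration.
import random

def probabilistic_tournament(population, fitness, p=0.8):
    """Pick two individuals at random; return the fitter one with probability p.

    p = 1.0 gives maximal selective pressure (always keep the fitter one),
    p = 0.5 gives no pressure at all (uniform random selection).
    """
    a, b = random.sample(population, 2)
    better, worse = (a, b) if fitness(a) >= fitness(b) else (b, a)
    return better if random.random() < p else worse

population = list(range(20))           # toy individuals: the integers 0..19
fitness = lambda x: x                  # toy fitness: the value itself
for p in (0.5, 0.8, 1.0):
    picks = [probabilistic_tournament(population, fitness, p) for _ in range(5000)]
    print(p, sum(picks) / len(picks))  # mean selected fitness rises with p
```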
",Centric selection: a way to tune the exploration/exploitation trade-off " Negative Slope Coefficient is an indicator of problem hardness that has been introduced in 2004 and that has returned promising results on a large set of problems. It is based on the concept of fitness cloud and works by partitioning the cloud into a number of bins representing as many different regions of the fitness landscape. The measure is calculated by joining the bins centroids by segments and summing all their negative slopes. In this paper, for the first time, we point out a potential problem of the Negative Slope Coefficient: we study its value for different instances of the well known NK-landscapes and we show how this indicator is dramatically influenced by the minimum number of points contained into a bin. Successively, we formally justify this behavior of the Negative Slope Coefficient and we discuss pros and cons of this measure. ","NK landscapes difficulty and Negative Slope Coefficient: How Sampling Influences the Results" " Effective debugging of ontologies is an important prerequisite for their broad application, especially in areas that rely on everyday users to create and maintain knowledge bases, such as the Semantic Web. In such systems ontologies capture formalized vocabularies of terms shared by its users. However in many cases users have different local views of the domain, i.e. of the context in which a given term is used. Inappropriate usage of terms together with natural complications when formulating and understanding logical descriptions may result in faulty ontologies. Recent ontology debugging approaches use diagnosis methods to identify causes of the faults. In most debugging scenarios these methods return many alternative diagnoses, thus placing the burden of fault localization on the user. This paper demonstrates how the target diagnosis can be identified by performing a sequence of observations, that is, by querying an oracle about entailments of the target ontology. To identify the best query we propose two query selection strategies: a simple ""split-in-half"" strategy and an entropy-based strategy. The latter allows knowledge about typical user errors to be exploited to minimize the number of queries. Our evaluation showed that the entropy-based method significantly reduces the number of required queries compared to the ""split-in-half"" approach. We experimented with different probability distributions of user errors and different qualities of the a-priori probabilities. Our measurements demonstrated the superiority of entropy-based query selection even in cases where all fault probabilities are equal, i.e. where no information about typical user errors is available. ","Interactive ontology debugging: two query strategies for efficient fault localization" " The web of data consists of data published on the web in such a way that they can be interpreted and connected together. It is thus critical to establish links between these data, both for the web of data and for the semantic web that it contributes to feed. We consider here the various techniques developed for that purpose and analyze their commonalities and differences. We propose a general framework and show how the diverse techniques fit in the framework. From this framework we consider the relation between data interlinking and ontology matching. 
Although they can be considered similar at a certain level (they both relate formal entities), they serve different purposes, but would find mutual benefit in collaborating. We thus present a scheme under which it is possible for data linking tools to take advantage of ontology alignments. ",MeLinDa: an interlinking framework for the web of data " Symmetries occur naturally in CSP or SAT problems and are not very difficult to discover, but using them to prune the search space tends to be very challenging. Indeed, this usually requires finding specific elements in a group of symmetries that can be huge, and the problem of their very existence is NP-hard. We formulate such an existence problem as a constraint problem on one variable (the symmetry to be used) ranging over a group, and try to find restrictions that may be solved in polynomial time. By considering a simple form of constraints (restricted by a cardinality k) and the class of groups that have the structure of Fp-vector spaces, we propose a partial algorithm based on linear algebra. This polynomial algorithm always applies when k=p=2, but may fail otherwise as we prove the problem to be NP-hard for all other values of k and p. Experiments show that this approach, though restricted, should allow for an efficient use of at least some groups of symmetries. We conclude with a few directions to be explored to efficiently solve this problem in the general case. ",Solving Linear Constraints in Elementary Abelian p-Groups of Symmetries " Given a causal model of some domain and a particular story that has taken place in this domain, the problem of actual causation is deciding which of the possible causes for some effect actually caused it. One of the most influential approaches to this problem has been developed by Halpern and Pearl in the context of structural models. In this paper, I argue that this is actually not the best setting for studying this problem. As an alternative, I offer the probabilistic logic programming language of CP-logic. Unlike structural models, CP-logic incorporates the deviant/default distinction that is generally considered an important aspect of actual causation, and it has an explicitly dynamic semantics, which helps to formalize the stories that serve as input to an actual causation problem. ",Actual Causation in CP-logic " This paper investigates under which conditions instantiation-based proof procedures can be combined in a nested way, in order to mechanically construct new instantiation procedures for richer theories. Interesting applications in the field of verification are emphasized, particularly for handling extensions of the theory of arrays. ",Instantiation Schemes for Nested Theories " Automating the design of heuristic search methods is an active research field within computer science, artificial intelligence and operational research. In order to make these methods more generally applicable, it is important to eliminate or reduce the role of the human expert in the process of designing an effective methodology to solve a given computational search problem. Researchers developing such methodologies are often constrained in the number of problem domains on which to test their adaptive, self-configuring algorithms, which can be explained by the inherent difficulty of implementing the corresponding domain-specific software components. This paper presents HyFlex, a software framework for the development of cross-domain search methodologies.
The framework features a common software interface for dealing with different combinatorial optimisation problems, and provides the algorithm components that are problem specific. In this way, the algorithm designer does not require detailed knowledge of the problem domains, and thus can concentrate his/her efforts on designing adaptive general-purpose heuristic search algorithms. Four hard combinatorial problems are fully implemented (maximum satisfiability, one dimensional bin packing, permutation flow shop and personnel scheduling), each containing a varied set of instance data (including real-world industrial applications) and an extensive set of problem specific heuristics and search operators. The framework forms the basis for the first International Cross-domain Heuristic Search Challenge (CHeSC), and it is currently in use by the international research community. In summary, HyFlex represents a valuable new benchmark of heuristic search generality, with which adaptive cross-domain algorithms are being easily developed, and reliably compared. ",HyFlex: A Benchmark Framework for Cross-domain Heuristic Search " In order to address complex systems, applying pattern recognition to their evolution could play a key role in understanding their dynamics. Global patterns are required to detect emergent concepts and trends, some of them of a qualitative nature. Formal Concept Analysis (FCA) is a theory whose goal is to discover and extract knowledge from qualitative data. It provides tools for reasoning with implication bases (and association rules). Implications and association rules are useful for reasoning on previously selected attributes, providing a formal foundation for logical reasoning. In this paper we analyse how to apply FCA reasoning to increase confidence in sports betting, by means of detecting temporal regularities from data. It is applied to build a knowledge-based system for confidence reasoning. ",Selecting Attributes for Sport Forecasting using Formal Concept Analysis " Perfectly rational decision-makers maximize expected utility, but crucially ignore the resource costs incurred when determining optimal actions. Here we employ an axiomatic framework for bounded rational decision-making based on a thermodynamic interpretation of resource costs as information costs. This leads to a variational ""free utility"" principle akin to thermodynamical free energy that trades off utility and information costs. We show that bounded optimal control solutions can be derived from this variational principle, which leads in general to stochastic policies. Furthermore, we show that risk-sensitive and robust (minimax) control schemes fall out naturally from this framework if the environment is considered as a bounded rational and perfectly rational opponent, respectively. When resource costs are ignored, the maximum expected utility principle is recovered. ","Information, Utility & Bounded Rationality" " ROC curves and cost curves are two popular ways of visualising classifier performance, finding appropriate thresholds according to the operating condition, and deriving useful aggregated measures such as the area under the ROC curve (AUC) or the area under the optimal cost curve. In this note we present some new findings and connections between ROC space and cost space, by using the expected loss over a range of operating conditions.
In particular, we show that ROC curves can be transferred to cost space by means of a very natural way of understanding how thresholds should be chosen, by selecting the threshold such that the proportion of positive predictions equals the operating condition (either in the form of cost proportion or skew). We call these new curves {ROC Cost Curves}, and we demonstrate that the expected loss as measured by the area under these curves is linearly related to AUC. This opens up a series of new possibilities and clarifies the notion of cost curve and its relation to ROC analysis. In addition, we show that for a classifier that assigns the scores in an evenly-spaced way, these curves are equal to the Brier Curves. As a result, this establishes the first clear connection between AUC and the Brier score. ",Technical Note: Towards ROC Curves in Cost Space " OWL 2 has been standardized by the World Wide Web Consortium (W3C) as a family of ontology languages for the Semantic Web. The most expressive of these languages is OWL 2 Full, but to date no reasoner has been implemented for this language. Consistency and entailment checking are known to be undecidable for OWL 2 Full. We have translated a large fragment of the OWL 2 Full semantics into first-order logic, and used automated theorem proving systems to do reasoning based on this theory. The results are promising, and indicate that this approach can be applied in practice for effective OWL reasoning, beyond the capabilities of current Semantic Web reasoners. This is an extended version of a paper with the same title that has been published at CADE 2011, LNAI 6803, pp. 446-460. The extended version provides appendices with additional resources that were used in the reported evaluation. ","Reasoning in the OWL 2 Full Ontology Language using First-Order Automated Theorem Proving" " This paper introduces 'just enough' principles and 'systems engineering' approach to the practice of ontology development to provide a minimal yet complete, lightweight, agile and integrated development process, supportive of stakeholder management and implementation independence. ",'Just Enough' Ontology Engineering " In this paper, we investigate the following question: how could you write such computer programs that can work like conscious beings? The motivation behind this question is that we want to create such applications that can see the future. The aim of this paper is to provide an overall conceptual framework for this new approach to machine consciousness. So we introduce a new programming paradigm called Consciousness Oriented Programming (COP). ",Conscious Machines and Consciousness Oriented Programming " There has been a noticeable shift in the relative composition of the industry in the developed countries in recent years; manufacturing is decreasing while the service sector is becoming more important. However, currently most simulation models for investigating service systems are still built in the same way as manufacturing simulation models, using a process-oriented world view, i.e. they model the flow of passive entities through a system. These kinds of models allow studying aspects of operational management but are not well suited for studying the dynamics that appear in service systems due to human behaviour. For these kinds of studies we require tools that allow modelling the system and entities using an object-oriented world view, where intelligent objects serve as abstract ""actors"" that are goal directed and can behave proactively. 
In our work we combine process-oriented discrete event simulation modelling and object-oriented agent-based simulation modelling to investigate the impact of people management practices on retail productivity. In this paper, we reveal in a series of experiments what impact considering proactivity can have on the output accuracy of simulation models of human-centric systems. The model and data we use for this investigation are based on a case study in a UK department store. We show that considering proactivity positively influences the validity of these kinds of models and therefore allows analysts to make better recommendations regarding strategies to apply people management practices. ","A First Approach on Modelling Staff Proactiveness in Retail Simulation Models" " A fact apparently not observed earlier in the literature of nonmonotonic reasoning is that Reiter, in his default logic paper, did not directly formalize informal defaults. Instead, he translated a default into a certain natural language proposition and provided a formalization of the latter. A few years later, Moore noted that propositions like the one used by Reiter are fundamentally different from defaults and exhibit a certain autoepistemic nature. Thus, Reiter had developed his default logic as a formalization of autoepistemic propositions rather than of defaults. The first goal of this paper is to show that some problems of Reiter's default logic as a formal way to reason about informal defaults are directly attributable to the autoepistemic nature of default logic and to the mismatch between informal defaults and Reiter's formal defaults, the latter being a formal expression of the autoepistemic propositions Reiter used as a representation of informal defaults. The second goal of our paper is to compare the work of Reiter and Moore. While each of them attempted to formalize autoepistemic propositions, the modes of reasoning in their respective logics were different. We revisit Moore's and Reiter's intuitions and present them from the perspective of autotheoremhood, where theories can include propositions referring to the theory's own theorems. We then discuss the formalization of this perspective in the logics of Moore and Reiter, respectively, using the unifying semantic framework for default and autoepistemic logics that we developed earlier. We argue that Reiter's default logic is a better formalization of Moore's intuitions about autoepistemic propositions than Moore's own autoepistemic logic. ","Reiter's Default Logic Is a Logic of Autoepistemic Reasoning And a Good One, Too" " In 1991, Michael Gelfond introduced the language of epistemic specifications. The goal was to develop tools for modeling problems that require some form of meta-reasoning, that is, reasoning over multiple possible worlds. Despite their relevance to knowledge representation, epistemic specifications have received relatively little attention so far. In this paper, we revisit the formalism of epistemic specifications. We offer a new definition of the formalism, propose several semantics (one of which, under syntactic restrictions we assume, turns out to be equivalent to the original semantics by Gelfond), derive some complexity results and, finally, show the effectiveness of the formalism for modeling problems requiring meta-reasoning considered recently by Faber and Woltran. All these results show that epistemic specifications deserve much more attention than has been afforded to them so far.
",Revisiting Epistemic Specifications " We discuss the evolution of aspects of nonmonotonic reasoning towards the computational paradigm of answer-set programming (ASP). We give a general overview of the roots of ASP and follow up with the personal perspective on research developments that helped verbalize the main principles of ASP and differentiated it from the classical logic programming. ","Origins of Answer-Set Programming - Some Background And Two Personal Accounts" " UCT, a state-of-the art algorithm for Monte Carlo tree sampling (MCTS), is based on UCB, a sampling policy for the Multi-armed Bandit Problem (MAB) that minimizes the accumulated regret. However, MCTS differs from MAB in that only the final choice, rather than all arm pulls, brings a reward, that is, the simple regret, as opposite to the cumulative regret, must be minimized. This ongoing work aims at applying meta-reasoning techniques to MCTS, which is non-trivial. We begin by introducing policies for multi-armed bandits with lower simple regret than UCB, and an algorithm for MCTS which combines cumulative and simple regret minimization and outperforms UCT. We also develop a sampling scheme loosely based on a myopic version of perfect value of information. Finite-time and asymptotic analysis of the policies is provided, and the algorithms are compared empirically. ",Doing Better Than UCT: Rational Monte Carlo Sampling in Trees " Self-Organizing Maps are commonly used for unsupervised learning purposes. This paper is dedicated to the certain modification of SOM called SOMN (Self-Organizing Mixture Networks) used as a mechanism for representing grayscale digital images. Any grayscale digital image regarded as a distribution function can be approximated by the corresponding Gaussian mixture. In this paper, the use of SOMN is proposed in order to obtain such approximations for input grayscale images in unsupervised manner. ","Self-Organizing Mixture Networks for Representation of Grayscale Digital Images" " Two different conceptions of emergence are reconciled as two instances of the phenomenon of detection. In the process of comparing these two conceptions, we find that the notions of complexity and detection allow us to form a unified definition of emergence that clearly delineates the role of the observer. ",Detection and emergence " The aim of this paper is to announce the release of a novel system for abstract argumentation which is based on decomposition and dynamic programming. We provide first experimental evaluations to show the feasibility of this approach. ",dynPARTIX - A Dynamic Programming Reasoner for Abstract Argumentation " Dung's famous abstract argumentation frameworks represent the core formalism for many problems and applications in the field of argumentation which significantly evolved within the last decade. Recent work in the field has thus focused on implementations for these frameworks, whereby one of the main approaches is to use Answer-Set Programming (ASP). While some of the argumentation semantics can be nicely expressed within the ASP language, others required rather cumbersome encoding techniques. Recent advances in ASP systems, in particular, the metasp optimization frontend for the ASP-package gringo/claspD provides direct commands to filter answer sets satisfying certain subset-minimality (or -maximality) constraints. This allows for much simpler encodings compared to the ones in standard ASP language. 
In this paper, we experimentally compare the original encodings (for the argumentation semantics based on preferred, semi-stable, and stage extensions, respectively) with new metasp encodings. Moreover, we provide novel encodings for the recently introduced resolution-based grounded semantics. Our experimental results indicate that the metasp approach works well in those cases where the complexity of the encoded problem is adequately mirrored within the metasp approach. ","Making Use of Advances in Answer-Set Programming for Abstract Argumentation Systems" " In a knowledge discovery process, interpretation and evaluation of the mined results are indispensable in practice. In the case of data clustering, however, it is often difficult to see in what respects each cluster has been formed. This paper proposes a method for automatic and objective characterization or ""verbalization"" of the clusters obtained by mixture models, in which we collect conjunctions of propositions (attribute-value pairs) that help us interpret or evaluate the clusters. The proposed method provides us with a new, in-depth and consistent tool for cluster interpretation/evaluation, and works for various types of datasets including continuous attributes and missing values. Experimental results with a couple of standard datasets exhibit the utility of the proposed method, and the importance of the feedback from the interpretation/evaluation step. ","Verbal Characterization of Probabilistic Clusters using Minimal Discriminative Propositions" " A brain-computer interface (BCI) may be used to control a prosthetic or orthotic hand using neural activity from the brain. The core of this sensorimotor BCI lies in the interpretation of the neural information extracted from the electroencephalogram (EEG). It is desirable to improve the interpretation of EEG to allow people with neuromuscular disorders to perform daily activities. This paper investigates the possibility of discriminating between the EEG associated with wrist and finger movements. The EEG was recorded from test subjects as they executed and imagined five essential hand movements using both hands. Independent component analysis (ICA) and time-frequency techniques were used to extract spectral features based on event-related (de)synchronisation (ERD/ERS), while the Bhattacharyya distance (BD) was used for feature reduction. Mahalanobis distance (MD) clustering and artificial neural networks (ANN) were used as classifiers and obtained average accuracies of 65% and 71%, respectively. This shows that EEG discrimination between wrist and finger movements is possible. This work introduces a new combination of motor tasks to BCI research. ","Single-trial EEG Discrimination between Wrist and Finger Movement Imagery and Execution in a Sensorimotor BCI" " We present a constraint-based approach to interactive product configuration. Our configurator tool FdConfig is based on feature models for the representation of the product domain. Such models can be directly mapped into constraint satisfaction problems and dealt with by appropriate constraint solvers. During the interactive configuration process the user generates new constraints as a result of his configuration decisions and may even retract constraints posted earlier. We discuss the configuration process, explain the underlying techniques and show optimizations. ",FdConfig: A Constraint-Based Interactive Product Configurator " Answer-Set Programming (ASP) is an established declarative programming paradigm. 
However, classical ASP lacks subprogram calls as in procedural programming, and access to external computations (like remote procedure calls) in general. Such features are desirable for increasing modularity and---assuming proper access is in place---for (meta-)reasoning over subprogram results. While HEX-programs extend classical ASP with external source access, they do not support calls of (sub-)programs upfront. We present nested HEX-programs, which extend HEX-programs to provide this feature in a user-friendly manner. Notably, the answer sets of called sub-programs can be individually accessed. This is particularly useful for applications that need to reason over answer sets, like belief set merging, user-defined aggregate functions, or preferences over answer sets. ",Nested HEX-Programs " Statistical relational learning techniques have been successfully applied in a wide range of relational domains. In most of these applications, the human designers capitalized on their background knowledge by following a trial-and-error trajectory, where relational features are manually defined by a human engineer, parameters are learned for those features on the training data, the resulting model is validated, and the cycle repeats as the engineer adjusts the set of features. This paper seeks to streamline application development in large relational domains by introducing a lightweight approach that efficiently evaluates relational features on pieces of the relational graph that are streamed to it one at a time. We evaluate our approach on two social media tasks and demonstrate that it leads to more accurate models that are learned faster. ",Structure Selection from Streaming Relational Data " In order to give appropriate semantics to qualitative conditionals of the form ""if A then normally B"", ordinal conditional functions (OCFs) ranking the possible worlds according to their degree of plausibility can be used. An OCF accepting all conditionals of a knowledge base R can be characterized as the solution of a constraint satisfaction problem. We present a high-level, declarative approach using constraint logic programming techniques for solving this constraint satisfaction problem. In particular, the approach developed here supports the generation of all minimal solutions; these minimal solutions are of special interest as they provide a basis for model-based inference from R. ","A Constraint Logic Programming Approach for Computing Ordinal Conditional Functions" " Publishing private data on external servers incurs the problem of how to avoid unwanted disclosure of confidential data. We study a problem of confidentiality in extended disjunctive logic programs and show how it can be solved by extended abduction. In particular, we analyze how credulous non-monotonic reasoning affects confidentiality. ","Confidentiality-Preserving Data Publishing for Credulous Users by Extended Abduction" " In this paper, we present domain-specific languages (DSLs) that we devised for use in the implementation of a finite domain constraint programming system, available as library(clpfd) in SWI-Prolog and YAP-Prolog. These DSLs are used in propagator selection and constraint reification. In these areas, they lead to concise specifications that are easy to read and reason about. At compilation time, these specifications are translated to Prolog code, reducing interpretative run-time overheads. 
The devised languages can be used in the implementation of other finite domain constraint solvers as well and may contribute to their correctness, conciseness and efficiency. ","Domain-specific Languages in a Finite Domain Constraint Programming System" " In this work a stand-alone preprocessor for SAT is presented that is able to perform most of the known preprocessing techniques. Preprocessing a formula in SAT is important for performance since redundancy can be removed. The preprocessor is part of the SAT solver riss and is called Coprocessor. Not only riss but also MiniSat 2.2 benefits from it, because the SatELite preprocessor of MiniSat does not implement recent techniques. By using more advanced techniques, Coprocessor is able to reduce the redundancy in a formula further and improve the overall solving performance. ",Coprocessor - a Standalone SAT Preprocessor " We present an application focused on the design of resilient long-reach passive optical networks. We specifically consider dual-parented networks whereby each customer must be connected to two metro sites via local exchange sites. An important property of such a placement is resilience to single metro node failure. The objective of the application is to determine the optimal position of a set of metro nodes such that the total optical fibre length is minimized. We prove that this problem is NP-Complete. We present two alternative combinatorial optimisation approaches to finding an optimal metro node placement: a mixed integer linear programming (MIP) formulation of the problem, and a hybrid approach that uses clustering as a preprocessing step. We consider a detailed case study based on a network for Ireland. The hybrid approach scales well and finds solutions that are close to optimal, with a runtime that is two orders of magnitude better than the MIP model. ","A Combinatorial Optimisation Approach to Designing Dual-Parented Long-Reach Passive Optical Networks" " Artificial general intelligence (AGI) refers to research aimed at tackling the full problem of artificial intelligence, that is, creating truly intelligent agents. This sets it apart from most AI research, which aims at solving relatively narrow problems in domains such as character recognition, motion planning, or increasing player satisfaction in games. But how do we know when an agent is truly intelligent? A common point of reference in the AGI community is Legg and Hutter's formal definition of universal intelligence, which has the appeal of simplicity and generality but is unfortunately incomputable. Games of various kinds are commonly used as benchmarks for ""narrow"" AI research, as they are considered to have many important properties. We argue that many of these properties carry over to the testing of general intelligence as well. We then sketch how such testing could practically be carried out. The central part of this sketch is an extension of universal intelligence to deal with finite time, and the use of sampling of the space of games expressed in a suitably biased game description language. ",Measuring Intelligence through Games " We propose a structured approach to the problem of retrieval of images by content and present a description logic that has been devised for the semantic indexing and retrieval of images containing complex objects. As other approaches do, we start from low-level features extracted with image analysis to detect and characterize regions in an image. 
However, in contrast with feature-based approaches, we provide a syntax to describe segmented regions as basic objects and complex objects as compositions of basic ones. Then we introduce a companion extensional semantics for defining reasoning services, such as retrieval, classification, and subsumption. These services can be used for both exact and approximate matching, using similarity measures. Using our logical approach as a formal specification, we implemented a complete client-server image retrieval system, which allows a user to pose both queries by sketch and queries by example. A set of experiments has been carried out on a testbed of images to assess the retrieval capabilities of the system in comparison with expert users' rankings. Results are presented adopting a well-established measure of quality borrowed from textual information retrieval. ",Structured Knowledge Representation for Image Retrieval " Wind energy plays an increasing role in the supply of energy worldwide. The energy output of a wind farm is highly dependent on the weather conditions present at the wind farm. If the output can be predicted more accurately, energy suppliers can coordinate the collaborative production of different energy sources more efficiently to avoid costly overproduction. With this paper, we take a computer science perspective on energy prediction based on weather data and analyze the important parameters as well as their correlation with the energy output. To deal with the interaction of the different parameters we use symbolic regression based on the genetic programming tool DataModeler. Our studies are carried out on publicly available weather and energy data for a wind farm in Australia. We reveal the correlation of the different variables with the energy output. The model obtained for energy prediction gives a very reliable prediction of the energy output for newly given weather data. ","Predicting the Energy Output of Wind Farms Based on Weather Data: Important Variables and their Correlation" " We consider the problem of reconstructing vehicle trajectories from sparse sequences of GPS points, for which the sampling interval is between 10 seconds and 2 minutes. We introduce a new class of algorithms, collectively called the path inference filter (PIF), that maps GPS data in real time, for a variety of trade-offs and scenarios, and with a high throughput. Numerous prior approaches in map-matching can be shown to be special cases of the path inference filter presented in this article. We present an efficient procedure for automatically training the filter on new data, with or without ground truth observations. The framework is evaluated on a large San Francisco taxi dataset and is shown to improve upon the current state of the art. This filter also provides insights into the driving patterns of drivers. The path inference filter has been deployed at an industrial scale inside the Mobile Millennium traffic information system, and is used to map fleets of data in San Francisco, Sacramento, Stockholm and Porto. ","The path inference filter: model-based low-latency map matching of probe vehicle data" " Software agents can be used to automate many of the tedious, time-consuming information processing tasks that humans currently have to complete manually. However, to do so, agent plans must be capable of representing the myriad of actions and control flows required to perform those tasks. In addition, since these tasks can require integrating multiple sources of remote information ---
typically, a slow, I/O-bound process --- it is desirable to make execution as efficient as possible. To address both of these needs, we present a flexible software agent plan language and a highly parallel execution system that enable the efficient execution of expressive agent plans. The plan language allows complex tasks to be more easily expressed by providing a variety of operators for flexibly processing the data as well as supporting subplans (for modularity) and recursion (for indeterminate looping). The executor is based on a streaming dataflow model of execution to maximize the amount of operator and data parallelism possible at runtime. We have implemented both the language and executor in a system called THESEUS. Our results from testing THESEUS show that streaming dataflow execution can yield significant speedups over both traditional serial (von Neumann) execution and the non-streaming dataflow-style execution that existing software and robot agent execution systems currently support. In addition, we show how plans written in the language we present can represent certain types of subtasks that cannot be accomplished using the languages supported by network query engines. Finally, we demonstrate that the increased expressivity of our plan language does not hamper performance; specifically, we show how data can be integrated from multiple remote sources just as efficiently using our architecture as is possible with a state-of-the-art streaming-dataflow network query engine. ","An Expressive Language and Efficient Execution System for Software Agents" " This work focuses on improving the state of the art in stochastic local search (SLS) for solving Boolean satisfiability (SAT) instances arising from real-world industrial SAT application domains. The recently introduced SLS method CRSat has been shown to noticeably improve on previously suggested SLS techniques in solving such real-world instances by combining justification-based local search with limited Boolean constraint propagation on the non-clausal formula representation form of Boolean circuits. In this work, we study possibilities of further improving the performance of CRSat by exploiting circuit-level structural knowledge for developing new search heuristics for CRSat. To this end, we introduce and experimentally evaluate a variety of search heuristics, many of which are motivated by circuit-level heuristics originally developed in completely different contexts, e.g., for electronic design automation applications. To the best of our knowledge, most of the heuristics are novel in the context of SLS for SAT and, more generally, SLS for constraint satisfaction problems. ","Structure-Based Local Search Heuristics for Circuit-Level Boolean Satisfiability" " This paper studies the problem of learning diagnostic policies from training examples. A diagnostic policy is a complete description of the decision-making actions of a diagnostician (i.e., tests followed by a diagnostic decision) for all possible combinations of test results. An optimal diagnostic policy is one that minimizes the expected total cost, which is the sum of measurement costs and misdiagnosis costs. In most diagnostic settings, there is a tradeoff between these two kinds of costs. This paper formalizes diagnostic decision making as a Markov Decision Process (MDP). The paper introduces a new family of systematic search algorithms based on the AO* algorithm to solve this MDP. 
To make AO* efficient, the paper describes an admissible heuristic that enables AO* to prune large parts of the search space. The paper also introduces several greedy algorithms including some improvements over previously-published methods. The paper then addresses the question of learning diagnostic policies from examples. When the probabilities of diseases and test results are computed from training data, there is a great danger of overfitting. To reduce overfitting, regularizers are integrated into the search algorithms. Finally, the paper compares the proposed methods on five benchmark diagnostic data sets. The studies show that in most cases the systematic search methods produce better diagnostic policies than the greedy methods. In addition, the studies show that for training sets of realistic size, the systematic search algorithms are practical on todays desktop computers. ","Integrating Learning from Examples into the Search for Diagnostic Policies" " Variable elimination is a general technique for constraint processing. It is often discarded because of its high space complexity. However, it can be extremely useful when combined with other techniques. In this paper we study the applicability of variable elimination to the challenging problem of finding still-lifes. We illustrate several alternatives: variable elimination as a stand-alone algorithm, interleaved with search, and as a source of good quality lower bounds. We show that these techniques are the best known option both theoretically and empirically. In our experiments we have been able to solve the n=20 instance, which is far beyond reach with alternative approaches. ","On the Practical use of Variable Elimination in Constraint Optimization Problems: 'Still-life' as a Case Study" " This is the second of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper presents the theoretical basis for the ideas underlying ZAP, arguing that existing ideas in this area exploit a single, recurring structure in that multiple database axioms can be obtained by operating on a single axiom using a subgroup of the group of permutations on the literals in the problem. We argue that the group structure precisely captures the general structure at which earlier approaches hinted, and give numerous examples of its use. We go on to extend the Davis-Putnam-Logemann-Loveland inference procedure to this broader setting, and show that earlier computational improvements are either subsumed or left intact by the new method. The third paper in this series discusses ZAPs implementation and presents experimental performance results. ",Generalizing Boolean Satisfiability II: Theory " Stochastic processes that involve the creation of objects and relations over time are widespread, but relatively poorly studied. For example, accurate fault diagnosis in factory assembly processes requires inferring the probabilities of erroneous assembly operations, but doing this efficiently and accurately is difficult. 
Modeled as dynamic Bayesian networks, these processes have discrete variables with very large domains and extremely high dimensionality. In this paper, we introduce relational dynamic Bayesian networks (RDBNs), which are an extension of dynamic Bayesian networks (DBNs) to first-order logic. RDBNs are a generalization of dynamic probabilistic relational models (DPRMs), which we had proposed in our previous work to model dynamic uncertain domains. We first extend the Rao-Blackwellised particle filtering described in our earlier work to RDBNs. Next, we lift the assumptions associated with Rao-Blackwellization in RDBNs and propose two new forms of particle filtering. The first one uses abstraction hierarchies over the predicates to smooth the particle filter's estimates. The second employs kernel density estimation with a kernel function specifically designed for relational domains. Experiments show these two methods greatly outperform standard particle filtering on the task of assembly plan execution monitoring. ",Relational Dynamic Bayesian Networks " We present a uniform non-monotonic solution to the problems of reasoning about action on the basis of an argumentation-theoretic approach. Our theory is provably correct relative to a sensible minimisation policy introduced on top of a temporal propositional logic. Sophisticated problem domains can be formalised in our framework. Since much of the attention of researchers in the field has been paid to traditional and basic problems in reasoning about actions, such as the frame, qualification and ramification problems, approaches to these problems within our formalisation lie at the heart of the expositions presented in this paper. ",Reasoning about Action: An Argumentation - Theoretic Approach " In this paper we present a new approach to modeling finite set domain constraint problems using Reduced Ordered Binary Decision Diagrams (ROBDDs). We show that it is possible to construct an efficient set domain propagator which compactly represents many set domains and set constraints using ROBDDs. We demonstrate that the ROBDD-based approach provides unprecedented flexibility in modeling constraint satisfaction problems, leading to performance improvements. We also show that the ROBDD-based modeling approach can be extended to the modeling of integer and multiset constraint problems in a straightforward manner. Since domain propagation is not always practical, we also show how to incorporate less strict consistency notions into the ROBDD framework, such as set bounds, cardinality bounds and lexicographic bounds consistency. Finally, we present experimental results that demonstrate that the ROBDD-based solver performs better than various more conventional constraint solvers on several standard set constraint problems. ",Solving Set Constraint Satisfaction Problems using ROBDDs " We present a novel approach to the automatic acquisition of taxonomies or concept hierarchies from a text corpus. The approach is based on Formal Concept Analysis (FCA), a method mainly used for the analysis of data, i.e. for investigating and processing explicitly given information. We follow Harris' distributional hypothesis and model the context of a certain term as a vector representing syntactic dependencies which are automatically acquired from the text corpus with a linguistic parser. On the basis of this context information, FCA produces a lattice that we convert into a special kind of partial order constituting a concept hierarchy. 
The approach is evaluated by comparing the resulting concept hierarchies with hand-crafted taxonomies for two domains: tourism and finance. We also directly compare our approach with hierarchical agglomerative clustering as well as with Bi-Section-KMeans as an instance of a divisive clustering algorithm. Furthermore, we investigate the impact of using different measures weighting the contribution of each attribute as well as of applying a particular smoothing technique to cope with data sparseness. ","Learning Concept Hierarchies from Text Corpora using Formal Concept Analysis" " This is the third of three papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal has been to define a representation in which this structure is apparent and can be exploited to improve computational performance. The first paper surveyed existing work that (knowingly or not) exploited problem structure to improve the performance of satisfiability engines, and the second paper showed that this structure could be understood in terms of groups of permutations acting on individual clauses in any particular Boolean theory. We conclude the series by discussing the techniques needed to implement our ideas, and by reporting on their performance on a variety of problem instances. ",Generalizing Boolean Satisfiability III: Implementation " When dealing with incomplete data in statistical learning, or incomplete observations in probabilistic inference, one needs to distinguish the fact that a certain event is observed from the fact that the observed event has happened. Since the modeling and computational complexities entailed by maintaining this proper distinction are often prohibitive, one asks for conditions under which it can be safely ignored. Such conditions are given by the missing at random (mar) and coarsened at random (car) assumptions. In this paper we provide an in-depth analysis of several questions relating to mar/car assumptions. Main purpose of our study is to provide criteria by which one may evaluate whether a car assumption is reasonable for a particular data collecting or observational process. This question is complicated by the fact that several distinct versions of mar/car assumptions exist. We therefore first provide an overview over these different versions, in which we highlight the distinction between distributional and coarsening variable induced versions. We show that distributional versions are less restrictive and sufficient for most applications. We then address from two different perspectives the question of when the mar/car assumption is warranted. First we provide a static analysis that characterizes the admissibility of the car assumption in terms of the support structure of the joint probability distribution of complete data and incomplete observations. Here we obtain an equivalence characterization that improves and extends a recent result by Grunwald and Halpern. We then turn to a procedural analysis that characterizes the admissibility of the car assumption in terms of procedural models for the actual data (or observation) generating process. 
The main result of this analysis is that the stronger coarsened completely at random (ccar) condition is arguably the most reasonable assumption, as it alone corresponds to data coarsening procedures that satisfy a natural robustness property. ",Ignorability in Statistical and Probabilistic Inference " Partially observable Markov decision processes (POMDPs) form an attractive and principled framework for agent planning under uncertainty. Point-based approximate techniques for POMDPs compute a policy based on a finite set of points collected in advance from the agents belief space. We present a randomized point-based value iteration algorithm called Perseus. The algorithm performs approximate value backup stages, ensuring that in each backup stage the value of each point in the belief set is improved; the key observation is that a single backup may improve the value of many belief points. Contrary to other point-based methods, Perseus backs up only a (randomly selected) subset of points in the belief set, sufficient for improving the value of each belief point in the set. We show how the same idea can be extended to dealing with continuous action spaces. Experimental results show the potential of Perseus in large scale POMDP problems. ",Perseus: Randomized Point-based Value Iteration for POMDPs " Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence and parameter estimation. The resulting representation and algorithms are experimentally evaluated on problems from the domain of bioinformatics. ",Logical Hidden Markov Models " We describe the version of the GPT planner used in the probabilistic track of the 4th International Planning Competition (IPC-4). This version, called mGPT, solves Markov Decision Processes specified in the PPDDL language by extracting and using different classes of lower bounds along with various heuristic-search algorithms. The lower bounds are extracted from deterministic relaxations where the alternative probabilistic effects of an action are mapped into different, independent, deterministic actions. The heuristic-search algorithms use these lower bounds for focusing the updates and delivering a consistent value function over all states reachable from the initial state and the greedy policy. ",mGPT: A Probabilistic Planner Based on Heuristic Search " Despite recent progress in AI planning, many benchmarks remain challenging for current planners. In many domains, the performance of a planner can greatly be improved by discovering and exploiting information about the domain structure that is not explicitly encoded in the initial PDDL formulation. In this paper we present and compare two automated methods that learn relevant information from previous experience in a domain and use it to solve new problem instances. Our methods share a common four-step strategy. First, a domain is analyzed and structural information is extracted, then macro-operators are generated based on the previously discovered structure. A filtering and ranking procedure selects the most useful macro-operators. Finally, the selected macros are used to speed up future searches. We have successfully used such an approach in the fourth international planning competition IPC-4. 
Our system, Macro-FF, extends Hoffmanns state-of-the-art planner FF 2.3 with support for two kinds of macro-operators, and with engineering enhancements. We demonstrate the effectiveness of our ideas on benchmarks from international planning competitions. Our results indicate a large reduction in search effort in those complex domains where structural information can be inferred. ","Macro-FF: Improving AI Planning with Automatically Learned Macro-Operators" " The Optiplan planning system is the first integer programming-based planner that successfully participated in the international planning competition. This engineering note describes the architecture of Optiplan and provides the integer programming formulation that enabled it to perform reasonably well in the competition. We also touch upon some recent developments that make integer programming encodings significantly more competitive. ",Optiplan: Unifying IP-based and Graph-based Planning " We study an approach to policy selection for large relational Markov Decision Processes (MDPs). We consider a variant of approximate policy iteration (API) that replaces the usual value-function learning step with a learning step in policy space. This is advantageous in domains where good policies are easier to represent and learn than the corresponding value functions, which is often the case for the relational MDPs we are interested in. In order to apply API to such problems, we introduce a relational policy language and corresponding learner. In addition, we introduce a new bootstrapping routine for goal-based planning domains, based on random walks. Such bootstrapping is necessary for many large relational MDPs, where reward is extremely sparse, as API is ineffective in such domains when initialized with an uninformed policy. Our experiments show that the resulting system is able to find good policies for a number of classical planning domains and their stochastic variants by solving them as extremely large relational MDPs. The experiments also point to some limitations of our approach, suggesting future work. ","Approximate Policy Iteration with a Policy Language Bias: Solving Relational Markov Decision Processes" " Tabu search is one of the most effective heuristics for locating high-quality solutions to a diverse array of NP-hard combinatorial optimization problems. Despite the widespread success of tabu search, researchers have a poor understanding of many key theoretical aspects of this algorithm, including models of the high-level run-time dynamics and identification of those search space features that influence problem difficulty. We consider these questions in the context of the job-shop scheduling problem (JSP), a domain where tabu search algorithms have been shown to be remarkably effective. Previously, we demonstrated that the mean distance between random local optima and the nearest optimal solution is highly correlated with problem difficulty for a well-known tabu search algorithm for the JSP introduced by Taillard. In this paper, we discuss various shortcomings of this measure and develop a new model of problem difficulty that corrects these deficiencies. We show that Taillards algorithm can be modeled with high fidelity as a simple variant of a straightforward random walk. The random walk model accounts for nearly all of the variability in the cost required to locate both optimal and sub-optimal solutions to random JSPs, and provides an explanation for differences in the difficulty of random versus structured JSPs. 
Finally, we discuss and empirically substantiate two novel predictions regarding tabu search algorithm behavior. First, the method for constructing the initial solution is highly unlikely to impact the performance of tabu search. Second, tabu tenure should be selected to be as small as possible while simultaneously avoiding search stagnation; values larger than necessary lead to significant degradations in performance. ","Linking Search Space Structure, Run-Time Dynamics, and Problem Difficulty: A Step Toward Demystifying Tabu Search" " Code optimization and high level synthesis can be posed as constraint satisfaction and optimization problems, such as graph coloring used in register allocation. Graph coloring is also used to model more traditional CSPs relevant to AI, such as planning, time-tabling and scheduling. Provably optimal solutions may be desirable for commercial and defense applications. Additionally, for applications such as register allocation and code optimization, naturally-occurring instances of graph coloring are often small and can be solved optimally. A recent wave of improvements in algorithms for Boolean satisfiability (SAT) and 0-1 Integer Linear Programming (ILP) suggests generic problem-reduction methods, rather than problem-specific heuristics, because (1) heuristics may be upset by new constraints, (2) heuristics tend to ignore structure, and (3) many relevant problems are provably inapproximable. Problem reductions often lead to highly symmetric SAT instances, and symmetries are known to slow down SAT solvers. In this work, we compare several avenues for symmetry breaking, in particular when certain kinds of symmetry are present in all generated instances. Our focus on reducing CSPs to SAT allows us to leverage recent dramatic improvement in SAT solvers and automatically benefit from future progress. We can use a variety of black-box SAT solvers without modifying their source code because our symmetry-breaking techniques are static, i.e., we detect symmetries and add symmetry breaking predicates (SBPs) during pre-processing. An important result of our work is that among the types of instance-independent SBPs we studied and their combinations, the simplest and least complete constructions are the most effective. Our experiments also clearly indicate that instance-independent symmetries should mostly be processed together with instance-specific symmetries rather than at the specification level, contrary to what has been suggested in the literature. ",Breaking Instance-Independent Symmetries In Exact Graph Coloring " A decision process in which rewards depend on history rather than merely on the current state is called a decision process with non-Markovian rewards (NMRDP). In decision-theoretic planning, where many desirable behaviours are more naturally expressed as properties of execution sequences rather than as properties of states, NMRDPs form a more natural model than the commonly adopted fully Markovian decision process (MDP) model. While the more tractable solution methods developed for MDPs do not directly apply in the presence of non-Markovian rewards, a number of solution methods for NMRDPs have been proposed in the literature. These all exploit a compact specification of the non-Markovian reward function in temporal logic, to automatically translate the NMRDP into an equivalent MDP which is solved using efficient MDP solution methods. 
This paper presents NMRDPP (Non-Markovian Reward Decision Process Planner), a software platform for developing and experimenting with methods for decision-theoretic planning with non-Markovian rewards. The current version of NMRDPP implements, under a single interface, a family of methods based on existing as well as new approaches which we describe in detail. These include dynamic programming, heuristic search, and structured methods. Using NMRDPP, we compare the methods and identify certain problem features that affect their performance. NMRDPP's treatment of non-Markovian rewards is inspired by the treatment of domain-specific search control knowledge in the TLPlan planner, which it incorporates as a special case. In the First International Probabilistic Planning Competition, NMRDPP was able to compete and perform well in both the domain-independent and hand-coded tracks, using search control knowledge in the latter. ",Decision-Theoretic Planning with non-Markovian Rewards " Boolean optimization finds a wide range of application domains, which has motivated a number of different organizations of Boolean optimizers since the mid-1990s. Some of the most successful approaches are based on iterative calls to an NP oracle, using either linear search, binary search or the identification of unsatisfiable sub-formulas. The increasing use of Boolean optimizers in practical settings raises the question of confidence in computed results. For example, the issue of confidence is paramount in safety-critical settings. One way of increasing confidence in the results computed by Boolean optimizers is to develop techniques for validating the results. Recent work studied the validation of Boolean optimizers based on branch-and-bound search. This paper complements existing work, and develops methods for validating Boolean optimizers that are based on iterative calls to an NP oracle. This entails implementing solutions for validating both satisfiable and unsatisfiable answers from the NP oracle. The work described in this paper can be applied to a wide range of Boolean optimizers that find application in Pseudo-Boolean Optimization and in Maximum Satisfiability. Preliminary experimental results indicate that the impact of the proposed method on overall performance is negligible. ",On Validating Boolean Optimizers " The article presents a study on the biobjective inventory routing problem. Contrary to most previous research, the problem is treated as a true multi-objective optimization problem, with the goal of identifying Pareto-optimal solutions. Due to the hardness of the problem at hand, a reference point based optimization approach is presented and implemented into an optimization and decision support system, which allows for the computation of a true subset of the optimal outcomes. Experimental investigations involving local search metaheuristics are conducted on benchmark data, and numerical results are reported and analyzed. ","On the use of reference points for the biobjective Inventory Routing Problem" " Variable neighborhood search (VNS) is a metaheuristic for solving optimization problems based on a simple principle: systematic changes of neighborhoods within the search, both in the descent to local minima and in the escape from the valleys which contain them. Designing these neighborhoods and applying them in a meaningful fashion is not an easy task. Moreover, an appropriate order in which they are applied must be determined. In this paper we attempt to investigate this issue. 
Assuming that we are given an optimization problem that is intended to be solved by applying the VNS scheme, how many and which types of neighborhoods should be investigated, and what would be appropriate selection criteria for applying these neighborhoods? More specifically, does it pay to ""look ahead"" (see, e.g., in the context of VNS and GRASP) when attempting to switch from one neighborhood to another? ",Neigborhood Selection in Variable Neighborhood Search " In this paper we demonstrate that two common problems in Machine Learning---imbalanced and overlapping data distributions---do not have independent effects on the performance of SVM classifiers. This result is notable since it shows that a model of either of these factors must account for the presence of the other. Our study of the relationship between these problems has led to the discovery of a previously unreported form of ""covert"" overfitting which is resilient to commonly used empirical regularization techniques. We demonstrate the existence of this covert phenomenon through several methods based around the parametric regularization of trained SVMs. Our findings in this area suggest a possible approach to quantifying overlap in real-world data sets. ","A Characterization of the Combined Effects of Overlap and Imbalance on the SVM Classifier" " In the theory of belief functions, many measures of uncertainty have been introduced. However, it is not always easy to understand what these measures really try to represent. In this paper, we re-interpret some measures of uncertainty in the theory of belief functions. We present some advantages and drawbacks of the existing measures. Based on these observations, we introduce a measure of contradiction. We then present degrees of non-specificity and Bayesianity of a mass. We propose a degree of specificity based on the distance between a mass and its most specific associated mass. We also show how to use the degree of specificity to measure the specificity of a fusion rule. Illustrations on simple examples are given. ","Contradiction measures and specificity degrees of basic belief assignments" " We discuss an attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of perception, the model consists of two interacting pathways: identity and control, intended to mirror the what and where pathways in neuroscience models. The identity pathway models object appearance and performs classification using deep (factored) Restricted Boltzmann Machines. At each point in time the observations consist of foveated images, with decaying resolution toward the periphery of the gaze. The control pathway models the location, orientation, scale and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the control pathway, we encounter an attentional mechanism that learns to select gazes so as to minimize tracking uncertainty. Unlike in our previous work, we introduce gaze selection strategies which operate in the presence of partial information and on a continuous action space. We show that a straightforward extension of the existing approach to the partial information setting results in poor performance, and we propose an alternative method based on modeling the reward surface as a Gaussian Process. 
This approach gives good performance in the presence of partial information and allows us to expand the action space from a small, discrete set of fixation points to a continuous domain. ",Learning where to Attend with Deep Architectures for Image Tracking " Several rules for social choice are examined from a unifying point of view that looks at them as procedures for revising a system of degrees of belief in accordance with certain specified logical constraints. Belief is here a social attribute, its degrees being measured by the fraction of people who share a given opinion. Different known rules and some new ones are obtained depending on which particular constraints are assumed. These constraints allow us to model different notions of choiceness. In particular, we give a new method to deal with approval-disapproval-preferential voting. ",Social choice rules driven by propositional logic " We investigate training and using Gaussian kernel SVMs by approximating the kernel with an explicit finite-dimensional polynomial feature representation based on the Taylor expansion of the exponential. Although not as efficient as the recently proposed random Fourier features [Rahimi and Recht, 2007] in terms of the number of features, we show how this polynomial representation can provide a better approximation in terms of the computational cost involved. This makes our ""Taylor features"" especially attractive for use on very large data sets, in conjunction with online or stochastic training. ",Explicit Approximations of the Gaussian Kernel " Today, available methods that assess AI systems are focused on using empirical techniques to measure the performance of algorithms on some specific tasks (e.g., playing chess, solving mazes or landing a helicopter). However, these methods are not appropriate if we want to evaluate the general intelligence of AI and, even less so, if we want to compare it with human intelligence. The ANYNT project has designed a new method of evaluation that tries to assess AI systems using well-known computational notions and problems which are as general as possible. This new method serves to assess general intelligence (which allows us to learn how to solve any new kind of problem we face) and not only to evaluate performance on a set of specific tasks. This method focuses not only on measuring the intelligence of algorithms, but also on assessing any intelligent system (human beings, animals, AI, aliens?, ...), letting us place their results on the same scale and, therefore, compare them. This new approach will allow us (in the future) to evaluate and compare any kind of intelligent system, known or yet to be built/found, be it artificial or biological. This master's thesis aims at ensuring that this new method provides consistent results when evaluating AI algorithms; this is done through the design and implementation of prototypes of universal intelligence tests and their application to different intelligent systems (AI algorithms and human beings). From the study we analyze whether the results obtained by two different intelligent systems are properly located on the same scale, and we propose changes and refinements to these prototypes in order to be able, in the future, to achieve a truly universal intelligence test. 
","Analysis of first prototype universal intelligence tests: evaluating and comparing AI algorithms and humans" " We provide an overview of the organization and results of the deterministic part of the 4th International Planning Competition, i.e., of the part concerned with evaluating systems doing deterministic planning. IPC-4 attracted even more competing systems than its already large predecessors, and the competition event was revised in several important respects. After giving an introduction to the IPC, we briefly explain the main differences between the deterministic part of IPC-4 and its predecessors. We then introduce formally the language used, called PDDL2.2 that extends PDDL2.1 by derived predicates and timed initial literals. We list the competing systems and overview the results of the competition. The entire set of data is far too large to be presented in full. We provide a detailed summary; the complete data is available in an online appendix. We explain how we awarded the competition prizes. ",The Deterministic Part of IPC-4: An Overview " PDDL2.1 was designed to push the envelope of what planning algorithms can do, and it has succeeded. It adds two important features: durative actions,which take time (and may have continuous effects); and objective functions for measuring the quality of plans. The concept of durative actions is flawed; and the treatment of their semantics reveals too strong an attachment to the way many contemporary planners work. Future PDDL innovators should focus on producing a clean semantics for additions to the language, and let planner implementers worry about coupling their algorithms to problems expressed in the latest version of the language. ",PDDL2.1 - The Art of the Possible? Commentary on Fox and Long " The addition of durative actions to PDDL2.1 sparked some controversy. Fox and Long argued that actions should be considered as instantaneous, but can start and stop processes. Ultimately, a limited notion of durative actions was incorporated into the language. I argue that this notion is still impoverished, and that the underlying philosophical position of regarding durative actions as being a shorthand for a start action, process, and stop action ignores the realities of modelling and execution for complex systems. ",The Case for Durative Actions: A Commentary on PDDL2.1 " We present a partial-order, conformant, probabilistic planner, Probapop which competed in the blind track of the Probabilistic Planning Competition in IPC-4. We explain how we adapt distance based heuristics for use with probabilistic domains. Probapop also incorporates heuristics based on probability of success. We explain the successes and difficulties encountered during the design and implementation of Probapop. ",Engineering a Conformant Probabilistic Planner " Between 1998 and 2004, the planning community has seen vast progress in terms of the sizes of benchmark examples that domain-independent planners can tackle successfully. The key technique behind this progress is the use of heuristic functions based on relaxing the planning task at hand, where the relaxation is to assume that all delete lists are empty. The unprecedented success of such methods, in many commonly used benchmark examples, calls for an understanding of what classes of domains these methods are well suited for. In the investigation at hand, we derive a formal background to such an understanding. 
We perform a case study covering a range of 30 commonly used STRIPS and ADL benchmark domains, including all examples used in the first four international planning competitions. We *prove* connections between domain structure and local search topology -- heuristic cost surface properties -- under an idealized version of the heuristic functions used in modern planners. The idealized heuristic function is called h^+, and differs from the practically used functions in that it returns the length of an *optimal* relaxed plan, which is NP-hard to compute. We identify several key characteristics of the topology under h^+, concerning the existence/non-existence of unrecognized dead ends, as well as the existence/non-existence of constant upper bounds on the difficulty of escaping local minima and benches. These distinctions divide the (set of all) planning domains into a taxonomy of classes of varying h^+ topology. As it turns out, many of the 30 investigated domains lie in classes with a relatively easy topology. Most particularly, 12 of the domains lie in classes where FFs search algorithm, provided with h^+, is a polynomial solving mechanism. We also present results relating h^+ to its approximation as implemented in FF. The behavior regarding dead ends is provably the same. We summarize the results of an empirical investigation showing that, in many domains, the topological qualities of h^+ are largely inherited by the approximation. The overall investigation gives a rare example of a successful analysis of the connections between typical-case problem structure, and search performance. The theoretical investigation also gives hints on how the topological phenomena might be automatically recognizable by domain analysis techniques. We outline some preliminary steps we made into that direction. ","Where 'Ignoring Delete Lists' Works: Local Search Topology in Planning Benchmarks" " A non-binary Constraint Satisfaction Problem (CSP) can be solved directly using extended versions of binary techniques. Alternatively, the non-binary problem can be translated into an equivalent binary one. In this case, it is generally accepted that the translated problem can be solved by applying well-established techniques for binary CSPs. In this paper we evaluate the applicability of the latter approach. We demonstrate that the use of standard techniques for binary CSPs in the encodings of non-binary problems is problematic and results in models that are very rarely competitive with the non-binary representation. To overcome this, we propose specialized arc consistency and search algorithms for binary encodings, and we evaluate them theoretically and empirically. We consider three binary representations; the hidden variable encoding, the dual encoding, and the double encoding. Theoretical and empirical results show that, for certain classes of non-binary constraints, binary encodings are a competitive option, and in many cases, a better one than the non-binary representation. ","Binary Encodings of Non-binary Constraint Satisfaction Problems: Algorithms and Experimental Results" " In a peer-to-peer inference system, each peer can reason locally but can also solicit some of its acquaintances, which are peers sharing part of its vocabulary. In this paper, we consider peer-to-peer inference systems in which the local theory of each peer is a set of propositional clauses defined upon a local vocabulary. 
An important characteristic of peer-to-peer inference systems is that the global theory (the union of all peer theories) is not known (as opposed to partition-based reasoning systems). The main contribution of this paper is to provide the first consequence finding algorithm in a peer-to-peer setting: DeCA. It is anytime and computes consequences gradually from the solicited peer to peers that are more and more distant. We exhibit a sufficient condition on the acquaintance graph of the peer-to-peer inference system for guaranteeing the completeness of this algorithm. Another important contribution is to apply this general distributed reasoning setting to the setting of the Semantic Web through the Somewhere semantic peer-to-peer data management system. The last contribution of this paper is to provide an experimental analysis of the scalability of the peer-to-peer infrastructure that we propose, on large networks of 1000 peers. ","Distributed Reasoning in a Peer-to-Peer Setting: Application to the Semantic Web" " In this paper, we introduce DLS-MC, a new stochastic local search algorithm for the maximum clique problem. DLS-MC alternates between phases of iterative improvement, during which suitable vertices are added to the current clique, and plateau search, during which vertices of the current clique are swapped with vertices not contained in the current clique. The selection of vertices is solely based on vertex penalties that are dynamically adjusted during the search, and a perturbation mechanism is used to overcome search stagnation. The behaviour of DLS-MC is controlled by a single parameter, penalty delay, which controls the frequency at which vertex penalties are reduced. We show empirically that DLS-MC achieves substantial performance improvements over state-of-the-art algorithms for the maximum clique problem over a large range of the commonly used DIMACS benchmark instances. ",Dynamic Local Search for the Maximum Clique Problem " Open distributed multi-agent systems are gaining interest in the academic community and in industry. In such open settings, agents are often coordinated using standardized agent conversation protocols. The representation of such protocols (for analysis, validation, monitoring, etc) is an important aspect of multi-agent applications. Recently, Petri nets have been shown to be an interesting approach to such representation, and radically different approaches using Petri nets have been proposed. However, their relative strengths and weaknesses have not been examined. Moreover, their scalability and suitability for different tasks have not been addressed. This paper addresses both these challenges. First, we analyze existing Petri net representations in terms of their scalability and appropriateness for overhearing, an important task in monitoring open multi-agent systems. Then, building on the insights gained, we introduce a novel representation using Colored Petri nets that explicitly represent legal joint conversation states and messages. This representation approach offers significant improvements in scalability and is particularly suitable for overhearing. Furthermore, we show that this new representation offers a comprehensive coverage of all conversation features of FIPA conversation standards. We also present a procedure for transforming AUML conversation protocol diagrams (a standard human-readable representation), to our Colored Petri net representation. 
",Representing Conversations for Scalable Overhearing " The hm admissible heuristics for (sequential and temporal) regression planning are defined by a parameterized relaxation of the optimal cost function in the regression search space, where the parameter m offers a trade-off between the accuracy and computational cost of theheuristic. Existing methods for computing the hm heuristic require time exponential in m, limiting them to small values (m andlt= 2). The hm heuristic can also be viewed as the optimal cost function in a relaxation of the search space: this paper presents relaxed search, a method for computing this function partially by searching in the relaxed space. The relaxed search method, because it computes hm only partially, is computationally cheaper and therefore usable for higher values of m. The (complete) hm heuristic is combined with partial hm heuristics, for m = 3,..., computed by relaxed search, resulting in a more accurate heuristic. This use of the relaxed search method to improve on the hm heuristic is evaluated by comparing two optimal temporal planners: TP4, which does not use it, and HSP*a, which uses it but is otherwise identical to TP4. The comparison is made on the domains used in the 2004 International Planning Competition, in which both planners participated. Relaxed search is found to be cost effective in some of these domains, but not all. Analysis reveals a characterization of the domains in which relaxed search can be expected to be cost effective, in terms of two measures on the original and relaxed search spaces. In the domains where relaxed search is cost effective, expanding small states is computationally cheaper than expanding large states and small states tend to have small successor states. ","Improving Heuristics Through Relaxed Search - An Analysis of TP4 and HSP*a in the 2004 Planning Competition" " Recently, a variety of constraint programming and Boolean satisfiability approaches to scheduling problems have been introduced. They have in common the use of relatively simple propagation mechanisms and an adaptive way to focus on the most constrained part of the problem. In some cases, these methods compare favorably to more classical constraint programming methods relying on propagation algorithms for global unary or cumulative resource constraints and dedicated search heuristics. In particular, we described an approach that combines restarting, with a generic adaptive heuristic and solution guided branching on a simple model based on a decomposition of disjunctive constraints. In this paper, we introduce an adaptation of this technique for an important subclass of job shop scheduling problems (JSPs), where the objective function involves minimization of earliness/tardiness costs. We further show that our technique can be improved by adding domain specific information for one variant of the JSP (involving time lag constraints). In particular we introduce a dedicated greedy heuristic, and an improved model for the case where the maximal time lag is 0 (also referred to as no-wait JSPs). ",Models and Strategies for Variants of the Job Shop Scheduling Problem " The Universal Intelligence Measure is a recently proposed formal definition of intelligence. It is mathematically specified, extremely general, and captures the essence of many informal definitions of intelligence. It is based on Hutter's Universal Artificial Intelligence theory, an extension of Ray Solomonoff's pioneering work on universal induction. 
Since the Universal Intelligence Measure is only asymptotically computable, building a practical intelligence test from it is not straightforward. This paper studies the practical issues involved in developing a real-world UIM-based performance metric. Based on our investigation, we develop a prototype implementation which we use to evaluate a number of different artificial agents. ",An Approximation of the Universal Intelligence Measure " Multiple sequence alignment (MSA) is a ubiquitous problem in computational biology. Although it is NP-hard to find an optimal solution for an arbitrary number of sequences, due to the importance of this problem researchers are trying to push the limits of exact algorithms further. Since MSA can be cast as a classical path finding problem, it is attracting a growing number of AI researchers interested in heuristic search algorithms as a challenge with actual practical relevance. In this paper, we first review two previous, complementary lines of research. Based on Hirschbergs algorithm, Dynamic Programming needs O(kN^(k-1)) space to store both the search frontier and the nodes needed to reconstruct the solution path, for k sequences of length N. Best first search, on the other hand, has the advantage of bounding the search space that has to be explored using a heuristic. However, it is necessary to maintain all explored nodes up to the final solution in order to prevent the search from re-expanding them at higher cost. Earlier approaches to reduce the Closed list are either incompatible with pruning methods for the Open list, or must retain at least the boundary of the Closed list. In this article, we present an algorithm that attempts at combining the respective advantages; like A* it uses a heuristic for pruning the search space, but reduces both the maximum Open and Closed size to O(kN^(k-1)), as in Dynamic Programming. The underlying idea is to conduct a series of searches with successively increasing upper bounds, but using the DP ordering as the key for the Open priority queue. With a suitable choice of thresholds, in practice, a running time below four times that of A* can be expected. In our experiments we show that our algorithm outperforms one of the currently most successful algorithms for optimal multiple sequence alignments, Partial Expansion A*, both in time and memory. Moreover, we apply a refined heuristic based on optimal alignments not only of pairs of sequences, but of larger subsets. This idea is not new; however, to make it practically relevant we show that it is equally important to bound the heuristic computation appropriately, or the overhead can obliterate any possible gain. Furthermore, we discuss a number of improvements in time and space efficiency with regard to practical implementations. Our algorithm, used in conjunction with higher-dimensional heuristics, is able to calculate for the first time the optimal alignment for almost all of the problems in Reference 1 of the benchmark database BAliBASE. ",An Improved Search Algorithm for Optimal Multiple-Sequence Alignment " This article develops Probabilistic Hybrid Action Models (PHAMs), a realistic causal model for predicting the behavior generated by modern percept-driven robot plans. 
PHAMs represent aspects of robot behavior that cannot be represented by most action models used in AI planning: the temporal structure of continuous control processes, their non-deterministic effects, several modes of their interferences, and the achievement of triggering conditions in closed-loop robot plans. The main contributions of this article are: (1) PHAMs, a model of concurrent percept-driven behavior, its formalization, and proofs that the model generates probably, qualitatively accurate predictions; and (2) a resource-efficient inference method for PHAMs based on sampling projections from probabilistic action models and state descriptions. We show how PHAMs can be applied to planning the course of action of an autonomous robot office courier based on analytical and experimental results. ","Probabilistic Hybrid Action Models for Predicting Concurrent Percept-driven Robot Behavior" " We present a novel framework for integrating prior knowledge into discriminative classifiers. Our framework allows discriminative classifiers such as Support Vector Machines (SVMs) to utilize prior knowledge specified in the generative setting. The dual objective of fitting the data and respecting prior knowledge is formulated as a bilevel program, which is solved (approximately) via iterative application of second-order cone programming. To test our approach, we consider the problem of using WordNet (a semantic database of English language) to improve low-sample classification accuracy of newsgroup categorization. WordNet is viewed as an approximate, but readily available source of background knowledge, and our framework is capable of utilizing it in a flexible way. ",Generative Prior Knowledge for Discriminative Classification " Fast Downward is a classical planning system based on heuristic search. It can deal with general deterministic planning problems encoded in the propositional fragment of PDDL2.2, including advanced features like ADL conditions and effects and derived predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a progression planner, searching the space of world states of a planning task in the forward direction. However, unlike other PDDL planning systems, Fast Downward does not use the propositional PDDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multi-valued planning tasks, which makes many of the implicit constraints of a propositional planning task explicit. Exploiting this alternative representation, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downwards approach to solving multi-valued planning tasks. We extend our earlier discussion of the causal graph heuristic to tasks involving axioms and conditional effects and present some novel techniques for search control that are used within Fast Downwards best-first search algorithm: preferred operators transfer the idea of helpful actions from local search to global best-first search, deferred evaluation of heuristic functions mitigates the negative effect of large branching factors on search performance, and multi-heuristic best-first search combines several heuristic evaluation functions within a single search algorithm in an orthogonal way. 
We also describe efficient data structures for fast state expansion (successor generators and axiom evaluators) and present a new non-heuristic search algorithm called focused iterative-broadening search, which utilizes the information encoded in causal graphs in a novel way. Fast Downward has proven remarkably successful: It won the ""classical"" (i.e., propositional, non-optimising) track of the 4th International Planning Competition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions and provide some insights about the usefulness of the new search enhancements. ",The Fast Downward Planning System " Distributed Constraint Satisfaction (DCSP) has long been considered an important problem in multi-agent systems research. This is because many real-world problems can be represented as constraint satisfaction and these problems often present themselves in a distributed form. In this article, we present a new complete, distributed algorithm called Asynchronous Partial Overlay (APO) for solving DCSPs that is based on a cooperative mediation process. The primary ideas behind this algorithm are that agents, when acting as a mediator, centralize small, relevant portions of the DCSP, that these centralized subproblems overlap, and that agents increase the size of their subproblems along critical paths within the DCSP as the problem solving unfolds. We present empirical evidence that shows that APO outperforms other known, complete DCSP techniques. ","Asynchronous Partial Overlay: A New Algorithm for Solving Distributed Constraint Satisfaction Problems" " As partial justification of their framework for iterated belief revision, Darwiche and Pearl convincingly argued against Boutilier's natural revision and provided a prototypical revision operator that fits into their scheme. We show that the Darwiche-Pearl arguments lead naturally to the acceptance of a smaller class of operators which we refer to as admissible. Admissible revision ensures that the penultimate input is not ignored completely, thereby eliminating natural revision, but includes the Darwiche-Pearl operator, Nayak's lexicographic revision operator, and a newly introduced operator called restrained revision. We demonstrate that restrained revision is the most conservative of admissible revision operators, effecting as few changes as possible, while lexicographic revision is the least conservative, and point out that restrained revision can also be viewed as a composite operator, consisting of natural revision preceded by an application of a ""backwards revision"" operator previously studied by Papini. Finally, we propose the establishment of a principled approach for choosing an appropriate revision operator in different contexts and discuss future work. ",Admissible and Restrained Revision " In recent years, CP-nets have emerged as a useful tool for supporting preference elicitation, reasoning, and representation. CP-nets capture and support reasoning with qualitative conditional preference statements, statements that are relatively natural for users to express. In this paper, we extend the CP-nets formalism to handle another class of very natural qualitative statements one often uses in expressing preferences in daily life - statements of relative importance of attributes. 
The resulting formalism, TCP-nets, maintains the spirit of CP-nets, in that it remains focused on using only simple and natural preference statements, uses the ceteris paribus semantics, and utilizes a graphical representation of this information to reason about its consistency and to perform, possibly constrained, optimization using it. The extra expressiveness it provides allows us to better model tradeoffs users would like to make, more faithfully representing their preferences. ",On Graphical Modeling of Preference and Importance " Linear Temporal Logic (LTL) is widely used for defining conditions on the execution paths of dynamic systems. In the case of dynamic systems that allow for nondeterministic evolutions, one has to specify, along with an LTL formula f, which are the paths that are required to satisfy the formula. Two extreme cases are the universal interpretation A.f, which requires that the formula be satisfied for all execution paths, and the existential interpretation E.f, which requires that the formula be satisfied for some execution path. When LTL is applied to the definition of goals in planning problems on nondeterministic domains, these two extreme cases are too restrictive. It is often impossible to develop plans that achieve the goal in all the nondeterministic evolutions of a system, and it is too weak to require that the goal is satisfied by some execution. In this paper we explore alternative interpretations of an LTL formula that are between these extreme cases. We define a new language that permits an arbitrary combination of the A and E quantifiers, thus allowing, for instance, to require that each finite execution can be extended to an execution satisfying an LTL formula (AE.f), or that there is some finite execution whose extensions all satisfy an LTL formula (EA.f). We show that only eight of these combinations of path quantifiers are relevant, corresponding to an alternation of the quantifiers of length one (A and E), two (AE and EA), three (AEA and EAE), and infinity ((AE)* and (EA)*). We also present a planning algorithm for the new language that is based on an automata-theoretic approach, and study its complexity. ","The Planning Spectrum - One, Two, Three, Infinity" " A delta-model is a satisfying assignment of a Boolean formula for which any small alteration, such as a single bit flip, can be repaired by flips to some small number of other bits, yielding a new satisfying assignment. These satisfying assignments represent robust solutions to optimization problems (e.g., scheduling) where it is possible to recover from unforeseen events (e.g., a resource becoming unavailable). The concept of delta-models was introduced by Ginsberg, Parkes and Roy (AAAI 1998), where it was proved that finding delta-models for general Boolean formulas is NP-complete. In this paper, we extend that result by studying the complexity of finding delta-models for classes of Boolean formulas which are known to have polynomial time satisfiability solvers. In particular, we examine 2-SAT, Horn-SAT, Affine-SAT, dual-Horn-SAT, 0-valid and 1-valid SAT. We see a wide variation in the complexity of finding delta-models, e.g., while 2-SAT and Affine-SAT have polynomial time tests for delta-models, testing whether a Horn-SAT formula has one is NP-complete. ",Fault Tolerant Boolean Satisfiability " Multimodal conversational interfaces provide a natural means for users to communicate with computer systems through multiple modalities such as speech and gesture. 
To build effective multimodal interfaces, automated interpretation of user multimodal inputs is important. Inspired by the previous investigation on cognitive status in multimodal human machine interaction, we have developed a greedy algorithm for interpreting user referring expressions (i.e., multimodal reference resolution). This algorithm incorporates the cognitive principles of Conversational Implicature and Givenness Hierarchy and applies constraints from various sources (e.g., temporal, semantic, and contextual) to resolve references. Our empirical results have shown the advantage of this algorithm in efficiently resolving a variety of user references. Because of its simplicity and generality, this approach has the potential to improve the robustness of multimodal input interpretation. ",Cognitive Principles in Robust Multimodal Interpretation " This paper presents a new framework for anytime heuristic search where the task is to achieve as many goals as possible within the allocated resources. We show the inadequacy of traditional distance-estimation heuristics for tasks of this type and present alternative heuristics that are more appropriate for multiple-goal search. In particular, we introduce the marginal-utility heuristic, which estimates the cost and the benefit of exploring a subtree below a search node. We developed two methods for online learning of the marginal-utility heuristic. One is based on local similarity of the partial marginal utility of sibling nodes, and the other generalizes marginal-utility over the state feature space. We apply our adaptive and non-adaptive multiple-goal search algorithms to several problems, including focused crawling, and show their superiority over existing methods. ",Multiple-Goal Heuristic Search " We present a heuristic search algorithm for solving first-order Markov Decision Processes (FOMDPs). Our approach combines first-order state abstraction that avoids evaluating states individually, and heuristic search that avoids evaluating all states. Firstly, in contrast to existing systems, which start with propositionalizing the FOMDP and then perform state abstraction on its propositionalized version we apply state abstraction directly on the FOMDP avoiding propositionalization. This kind of abstraction is referred to as first-order state abstraction. Secondly, guided by an admissible heuristic, the search is restricted to those states that are reachable from the initial state. We demonstrate the usefulness of the above techniques for solving FOMDPs with a system, referred to as FluCaP (formerly, FCPlanner), that entered the probabilistic track of the 2004 International Planning Competition (IPC2004) and demonstrated an advantage over other planners on the problems represented in first-order terms. ",FluCaP: A Heuristic Search Planner for First-Order MDPs " Suppose we want to build a system that answers a natural language question by representing its semantics as a logical form and computing the answer given a structured database of facts. The core part of such a system is the semantic parser that maps questions to logical forms. Semantic parsers are typically trained from examples of questions annotated with their target logical forms, but this type of annotation is expensive. Our goal is to learn a semantic parser from question-answer pairs instead, where the logical form is modeled as a latent variable. 
Motivated by this challenging learning problem, we develop a new semantic formalism, dependency-based compositional semantics (DCS), which has favorable linguistic, statistical, and computational properties. We define a log-linear distribution over DCS logical forms and estimate the parameters using a simple procedure that alternates between beam search and numerical optimization. On two standard semantic parsing benchmarks, our system outperforms all existing state-of-the-art systems, despite using no annotated logical forms. ",Learning Dependency-Based Compositional Semantics " It was recently proved that a sound and complete qualitative simulator does not exist, that is, as long as the input-output vocabulary of the state-of-the-art QSIM algorithm is used, there will always be input models which cause any simulator with a coverage guarantee to make spurious predictions in its output. In this paper, we examine whether a meaningfully expressive restriction of this vocabulary is possible so that one can build a simulator with both the soundness and completeness properties. We prove several negative results: All sound qualitative simulators, employing subsets of the QSIM representation which retain the operating region transition feature, and support at least the addition and constancy constraints, are shown to be inherently incomplete. Even when the simulations are restricted to run in a single operating region, a constraint vocabulary containing just the addition, constancy, derivative, and multiplication relations makes the construction of sound and complete qualitative simulators impossible. ",Causes of Ineradicable Spurious Predictions in Qualitative Simulation " We study properties of programs with monotone and convex constraints. We extend to these formalisms concepts and results from normal logic programming. They include the notions of strong and uniform equivalence with their characterizations, tight programs and Fages Lemma, program completion and loop formulas. Our results provide an abstract account of properties of some recent extensions of logic programming with aggregates, especially the formalism of lparse programs. They imply a method to compute stable models of lparse programs by means of off-the-shelf solvers of pseudo-boolean constraints, which is often much faster than the smodels system. ","Properties and Applications of Programs with Monotone and Convex Constraints" " We characterize the search landscape of random instances of the job shop scheduling problem (JSP). Specifically, we investigate how the expected values of (1) backbone size, (2) distance between near-optimal schedules, and (3) makespan of random schedules vary as a function of the job to machine ratio (N/M). For the limiting cases N/M approaches 0 and N/M approaches infinity we provide analytical results, while for intermediate values of N/M we perform experiments. We prove that as N/M approaches 0, backbone size approaches 100%, while as N/M approaches infinity the backbone vanishes. In the process we show that as N/M approaches 0 (resp. N/M approaches infinity), simple priority rules almost surely generate an optimal schedule, providing theoretical evidence of an ""easy-hard-easy"" pattern of typical-case instance difficulty in job shop scheduling. We also draw connections between our theoretical results and the ""big valley"" picture of JSP landscapes. 
","How the Landscape of Random Job Shop Scheduling Instances Depends on the Ratio of Jobs to Machines" " We consider interactive tools that help users search for their most preferred item in a large collection of options. In particular, we examine example-critiquing, a technique for enabling users to incrementally construct preference models by critiquing example options that are presented to them. We present novel techniques for improving the example-critiquing technology by adding suggestions to its displayed options. Such suggestions are calculated based on an analysis of users current preference model and their potential hidden preferences. We evaluate the performance of our model-based suggestion techniques with both synthetic and real users. Results show that such suggestions are highly attractive to users and can stimulate them to express more preferences to improve the chance of identifying their most preferred item by up to 78%. ",Preference-based Search using Example-Critiquing with Suggestions " The Partially Observable Markov Decision Process has long been recognized as a rich framework for real-world planning and control problems, especially in robotics. However exact solutions in this framework are typically computationally intractable for all but the smallest problems. A well-known technique for speeding up POMDP solving involves performing value backups at specific belief points, rather than over the entire belief simplex. The efficiency of this approach, however, depends greatly on the selection of points. This paper presents a set of novel techniques for selecting informative belief points which work well in practice. The point selection procedure is combined with point-based value backups to form an effective anytime POMDP algorithm called Point-Based Value Iteration (PBVI). The first aim of this paper is to introduce this algorithm and present a theoretical analysis justifying the choice of belief selection technique. The second aim of this paper is to provide a thorough empirical comparison between PBVI and other state-of-the-art POMDP methods, in particular the Perseus algorithm, in an effort to highlight their similarities and differences. Evaluation is performed using both standard POMDP domains and realistic robotic tasks. ",Anytime Point-Based Approximations for Large POMDPs " Efficient representations and solutions for large decision problems with continuous and discrete variables are among the most important challenges faced by the designers of automated decision support systems. In this paper, we describe a novel hybrid factored Markov decision process (MDP) model that allows for a compact representation of these problems, and a new hybrid approximate linear programming (HALP) framework that permits their efficient solutions. The central idea of HALP is to approximate the optimal value function by a linear combination of basis functions and optimize its weights by linear programming. We analyze both theoretical and computational aspects of this approach, and demonstrate its scale-up potential on several hybrid optimization problems. ",Solving Factored MDPs with Hybrid State and Action Variables " This paper introduces and analyzes a battery of inference models for the problem of semantic role labeling: one based on constraint satisfaction, and several strategies that model the inference as a meta-learning problem using discriminative classifiers. 
These classifiers are developed with a rich set of novel features that encode proposition and sentence-level information. To our knowledge, this is the first work that: (a) performs a thorough analysis of learning-based inference models for semantic role labeling, and (b) compares several inference strategies in this context. We evaluate the proposed inference strategies in the framework of the CoNLL-2005 shared task using only automatically-generated syntactic information. The extensive experimental evaluation and analysis indicates that all the proposed inference strategies are successful -they all outperform the current best results reported in the CoNLL-2005 evaluation exercise- but each of the proposed approaches has its advantages and disadvantages. Several important traits of a state-of-the-art SRL combination strategy emerge from this analysis: (i) individual models should be combined at the granularity of candidate arguments rather than at the granularity of complete solutions; (ii) the best combination strategy uses an inference model based in learning; and (iii) the learning-based inference benefits from max-margin classifiers and global feedback. ",Combination Strategies for Semantic Role Labeling " In contrast to the existing approaches to bisimulation for fuzzy systems, we introduce a behavioral distance to measure the behavioral similarity of states in a nondeterministic fuzzy-transition system. This behavioral distance is defined as the greatest fixed point of a suitable monotonic function and provides a quantitative analogue of bisimilarity. The behavioral distance has the important property that two states are at zero distance if and only if they are bisimilar. Moreover, for any given threshold, we find that states with behavioral distances bounded by the threshold are equivalent. In addition, we show that two system combinators---parallel composition and product---are non-expansive with respect to our behavioral distance, which makes compositional verification possible. ",A Behavioral Distance for Fuzzy-Transition Systems " In a field of research about general reasoning mechanisms, it is essential to have appropriate benchmarks. Ideally, the benchmarks should reflect possible applications of the developed technology. In AI Planning, researchers more and more tend to draw their testing examples from the benchmark collections used in the International Planning Competition (IPC). In the organization of (the deterministic part of) the fourth IPC, IPC-4, the authors therefore invested significant effort to create a useful set of benchmarks. They come from five different (potential) real-world applications of planning: airport ground traffic control, oil derivative transportation in pipeline networks, model-checking safety properties, power supply restoration, and UMTS call setup. Adapting and preparing such an application for use as a benchmark in the IPC involves, at the time, inevitable (often drastic) simplifications, as well as careful choice between, and engineering of, domain encodings. For the first time in the IPC, we used compilations to formulate complex domain features in simple languages such as STRIPS, rather than just dropping the more interesting problem constraints in the simpler language subsets. The article explains and discusses the five application domains and their adaptation to form the PDDL test suites used in IPC-4. 
We summarize known theoretical results on structural properties of the domains, regarding their computational complexity and provable properties of their topology under the h+ function (an idealized version of the relaxed plan heuristic). We present new (empirical) results illuminating properties such as the quality of the most widespread heuristic functions (planning graph, serial planning graph, and relaxed plan), the growth of propositional representations over instance size, and the number of actions available to achieve each fact; we discuss these data in conjunction with the best results achieved by the different kinds of planners participating in IPC-4. ","Engineering Benchmarks for Planning: the Domains Used in the Deterministic Part of IPC-4" " In this paper we present pddl+, a planning domain description language for modelling mixed discrete-continuous planning domains. We describe the syntax and modelling style of pddl+, showing that the language makes the modelling of complex time-dependent effects convenient. We provide a formal semantics for pddl+ by mapping planning instances into constructs of hybrid automata. Using the syntax of HAs as our semantic model, we construct a semantic mapping to labelled transition systems to complete the formal interpretation of pddl+ planning instances. An advantage of building a mapping from pddl+ to HA theory is that it forms a bridge between the Planning and Real Time Systems research communities. One consequence is that we can expect to make use of some of the theoretical properties of HAs. For example, for a restricted class of HAs the Reachability problem (which is equivalent to Plan Existence) is decidable. pddl+ provides an alternative to the continuous durative action model of pddl2.1, adding a more flexible and robust model of time-dependent behaviour. ",Modelling Mixed Discrete-Continuous Domains for Planning " In this paper, we show that there is a close relation between consistency in a constraint network and set intersection. A proof schema is provided as a generic way to obtain consistency properties from properties on set intersection. This approach not only simplifies the understanding of and unifies many existing consistency results, but also directs the study of consistency to that of set intersection properties in many situations, as demonstrated by the results on the convexity and tightness of constraints in this paper. Specifically, we identify a new class of tree convex constraints where local consistency ensures global consistency. This generalizes row convex constraints. Various consistency results are also obtained on constraint networks where only some, in contrast to all in the existing work, constraints are tight. ",Set Intersection and Consistency in Constraint Networks " In this paper, we study the possibility of designing non-trivial random CSP models by exploiting the intrinsic connection between structures and typical-case hardness. We show that constraint consistency, a notion that has been developed to improve the efficiency of CSP algorithms, is in fact the key to the design of random CSP models that have interesting phase transition behavior and guaranteed exponential resolution complexity without putting much restriction on the parameter of constraint tightness or the domain size of the problem. 
We propose a very flexible framework for constructing problem instances with interesting behavior and develop a variety of concrete methods to construct specific random CSP models that enforce different levels of constraint consistency. A series of experimental studies with interesting observations is carried out to illustrate the effectiveness of introducing structural elements in random instances, to verify the robustness of our proposal, and to investigate features of some specific models based on our framework that are highly related to the behavior of backtracking search algorithms. ",Consistency and Random Constraint Satisfaction Models " In this paper, we present two alternative approaches to defining answer sets for logic programs with arbitrary types of abstract constraint atoms (c-atoms). These approaches generalize the fixpoint-based and the level mapping based answer set semantics of normal logic programs to the case of logic programs with arbitrary types of c-atoms. The results are four different answer set definitions which are equivalent when applied to normal logic programs. The standard fixpoint-based semantics of logic programs is generalized in two directions, called answer set by reduct and answer set by complement. These definitions, which differ from each other in the treatment of negation-as-failure (naf) atoms, make use of an immediate consequence operator to perform answer set checking, whose definition relies on the notion of conditional satisfaction of c-atoms w.r.t. a pair of interpretations. The other two definitions, called strongly and weakly well-supported models, are generalizations of the notion of well-supported models of normal logic programs to the case of programs with c-atoms. As for the case of fixpoint-based semantics, the difference between these two definitions is rooted in the treatment of naf atoms. We prove that answer sets by reduct (resp. by complement) are equivalent to weakly (resp. strongly) well-supported models of a program, thus generalizing the theorem on the correspondence between stable models and well-supported models of a normal logic program to the class of programs with c-atoms. We show that the newly defined semantics coincide with previously introduced semantics for logic programs with monotone c-atoms, and they extend the original answer set semantics of normal logic programs. We also study some properties of answer sets of programs with c-atoms, and relate our definitions to several semantics for logic programs with aggregates presented in the literature. ",Answer Sets for Logic Programs with Arbitrary Abstract Constraint Atoms " Many combinatorial optimization problems such as the bin packing and multiple knapsack problems involve assigning a set of discrete objects to multiple containers. These problems can be used to model task and resource allocation problems in multi-agent systems and distributed systems, and can also be found as subproblems of scheduling problems. We propose bin completion, a branch-and-bound strategy for one-dimensional, multicontainer packing problems. Bin completion combines a bin-oriented search space with a powerful dominance criterion that enables us to prune much of the space. The performance of the basic bin completion framework can be enhanced by using a number of extensions, including nogood-based pruning techniques that allow further exploitation of the dominance criterion. Bin completion is applied to four problems: multiple knapsack, bin covering, min-cost covering, and bin packing. 
We show that our bin completion algorithms yield new, state-of-the-art results for the multiple knapsack, bin covering, and min-cost covering problems, outperforming previous algorithms by several orders of magnitude with respect to runtime on some classes of hard, random problem instances. For the bin packing problem, we demonstrate significant improvements compared to most previous results, but show that bin completion is not competitive with current state-of-the-art cutting-stock based approaches. ","Bin Completion Algorithms for Multicontainer Packing, Knapsack, and Covering Problems" " In real-life temporal scenarios, uncertainty and preferences are often essential and coexisting aspects. We present a formalism where quantitative temporal constraints with both preferences and uncertainty can be defined. We show how three classical notions of controllability (that is, strong, weak, and dynamic), which have been developed for uncertain temporal problems, can be generalized to handle preferences as well. After defining this general framework, we focus on problems where preferences follow the fuzzy approach, and with properties that assure tractability. For such problems, we propose algorithms to check the presence of the controllability properties. In particular, we show that in such a setting dealing simultaneously with preferences and uncertainty does not increase the complexity of controllability testing. We also develop a dynamic execution algorithm, of polynomial complexity, that produces temporal plans under uncertainty that are optimal with respect to fuzzy preferences. ","Uncertainty in Soft Temporal Constraint Problems: A General Framework and Controllability Algorithms for the Fuzzy Case" " In recent years, several research efforts have focused on the concept of time granularity and its applications. A first stream of research investigated the mathematical models behind the notion of granularity and the algorithms to manage temporal data based on those models. A second stream of research investigated symbolic formalisms providing a set of algebraic operators to define granularities in a compact and compositional way. However, only very limited manipulation algorithms have been proposed to operate directly on the algebraic representation, making it unsuitable to use the symbolic formalisms in applications that need manipulation of granularities. This paper aims at filling the gap between the results from these two streams of research, by providing an efficient conversion from the algebraic representation to the equivalent low-level representation based on the mathematical models. In addition, the conversion returns a minimal representation in terms of period length. Our results have a major practical impact: users can more easily define arbitrary granularities in terms of algebraic operators, and then access granularity reasoning and other services operating efficiently on the equivalent, minimal low-level representation. As an example, we illustrate the application to temporal constraint reasoning with multiple granularities. From a technical point of view, we propose a hybrid algorithm that interleaves the conversion of calendar subexpressions into periodical sets with the minimization of the period length. The algorithm returns set-based granularity representations having minimal period length, which is the most relevant parameter for the performance of the considered reasoning services. 
Extensive experimental work supports the techniques used in the algorithm, and shows the efficiency and effectiveness of the algorithm. ","Supporting Temporal Reasoning by Mapping Calendar Expressions to Minimal Periodic Sets" " We consider the problem of computing a lightest derivation of a global structure using a set of weighted rules. A large variety of inference problems in AI can be formulated in this framework. We generalize A* search and heuristics derived from abstractions to a broad class of lightest derivation problems. We also describe a new algorithm that searches for lightest derivations using a hierarchy of abstractions. Our generalization of A* gives a new algorithm for searching AND/OR graphs in a bottom-up fashion. We discuss how the algorithms described here provide a general architecture for addressing the pipeline problem --- the problem of passing information back and forth between various stages of processing in a perceptual system. We consider examples in computer vision and natural language processing. We apply the hierarchical search algorithm to the problem of estimating the boundaries of convex objects in grayscale images and compare it to other search methods. A second set of experiments demonstrate the use of a new compositional model for finding salient curves in images. ",The Generalized A* Architecture " In this paper, we construct and investigate a hierarchy of spatio-temporal formalisms that result from various combinations of propositional spatial and temporal logics such as the propositional temporal logic PTL, the spatial logics RCC-8, BRCC-8, S4u and their fragments. The obtained results give a clear picture of the trade-off between expressiveness and computational realisability within the hierarchy. We demonstrate how different combining principles as well as spatial and temporal primitives can produce NP-, PSPACE-, EXPSPACE-, 2EXPSPACE-complete, and even undecidable spatio-temporal logics out of components that are at most NP- or PSPACE-complete. ",Combining Spatial and Temporal Logics: Expressiveness vs. Complexity " The treatment of exogenous events in planning is practically important in many real-world domains where the preconditions of certain plan actions are affected by such events. In this paper we focus on planning in temporal domains with exogenous events that happen at known times, imposing the constraint that certain actions in the plan must be executed during some predefined time windows. When actions have durations, handling such temporal constraints adds an extra difficulty to planning. We propose an approach to planning in these domains which integrates constraint-based temporal reasoning into a graph-based planning framework using local search. Our techniques are implemented in a planner that took part in the 4th International Planning Competition (IPC-4). A statistical analysis of the results of IPC-4 demonstrates the effectiveness of our approach in terms of both CPU-time and plan quality. Additional experiments show the good performance of the temporal reasoning techniques integrated into our planner. ","An Approach to Temporal Planning and Scheduling in Domains with Predictable Exogenous Events" " In this commentary I argue that although PDDL is a very useful standard for the planning competition, its design does not properly consider the issue of domain modeling. Hence, I would not advocate its use in specifying planning domains outside of the context of the planning competition. 
Rather, the field needs to explore different approaches and grapple more directly with the problem of effectively modeling and utilizing all of the diverse pieces of knowledge we typically have about planning domains. ",The Power of Modeling - a Response to PDDL2.1 " PDDL was originally conceived and constructed as a lingua franca for the International Planning Competition. PDDL2.1 embodies a set of extensions intended to support the expression of something closer to real planning problems. This objective has only been partially achieved, due in large part to a deliberate focus on not moving too far from classical planning models and solution methods. ",Imperfect Match: PDDL 2.1 and Real Applications " I comment on the PDDL 2.1 language and its use in the planning competition, focusing on the choices made for accommodating time and concurrency. I also discuss some methodological issues that have to do with the move toward more expressive planning languages and the balance needed in planning research between semantics and computation. ",PDDL 2.1: Representation vs. Computation " Most classical scheduling formulations assume a fixed and known duration for each activity. In this paper, we weaken this assumption, requiring instead that each duration can be represented by an independent random variable with a known mean and variance. The best solutions are ones which have a high probability of achieving a good makespan. We first create a theoretical framework, formally showing how Monte Carlo simulation can be combined with deterministic scheduling algorithms to solve this problem. We propose an associated deterministic scheduling problem whose solution is proved, under certain conditions, to be a lower bound for the probabilistic problem. We then propose and investigate a number of techniques for solving such problems based on combinations of Monte Carlo simulation, solutions to the associated deterministic problem, and either constraint programming or tabu search. Our empirical results demonstrate that a combination of the use of the associated deterministic problem and Monte Carlo simulation results in algorithms that scale best both in terms of problem size and uncertainty. Further experiments point to the correlation between the quality of the deterministic solution and the quality of the probabilistic solution as a major factor responsible for this success. ","Proactive Algorithms for Job Shop Scheduling with Probabilistic Durations" " This paper is concerned with a class of algorithms that perform exhaustive search on propositional knowledge bases. We show that each of these algorithms defines and generates a propositional language. Specifically, we show that the trace of a search can be interpreted as a combinational circuit, and a search algorithm then defines a propositional language consisting of circuits that are generated across all possible executions of the algorithm. In particular, we show that several versions of exhaustive DPLL search correspond to such well-known languages as FBDD, OBDD, and a precisely-defined subset of d-DNNF. By thus mapping search algorithms to propositional languages, we provide a uniform and practical framework in which successful search techniques can be harnessed for compilation of knowledge into various languages of interest, and a new methodology whereby the power and limitations of search algorithms can be understood by looking up the tractability and succinctness of the corresponding propositional languages. 
",The Language of Search " The best performing algorithms for a particular oversubscribed scheduling application, Air Force Satellite Control Network (AFSCN) scheduling, appear to have little in common. Yet, through careful experimentation and modeling of performance in real problem instances, we can relate characteristics of the best algorithms to characteristics of the application. In particular, we find that plateaus dominate the search spaces (thus favoring algorithms that make larger changes to solutions) and that some randomization in exploration is critical to good performance (due to the lack of gradient information on the plateaus). Based on our explanations of algorithm performance, we develop a new algorithm that combines characteristics of the best performers; the new algorithms performance is better than the previous best. We show how hypothesis driven experimentation and search modeling can both explain algorithm performance and motivate the design of a new algorithm. ","Understanding Algorithm Performance on an Oversubscribed Scheduling Application" " This paper describes Marvin, a planner that competed in the Fourth International Planning Competition (IPC 4). Marvin uses action-sequence-memoisation techniques to generate macro-actions, which are then used during search for a solution plan. We provide an overview of its architecture and search behaviour, detailing the algorithms used. We also empirically demonstrate the effectiveness of its features in various planning domains; in particular, the effects on performance due to the use of macro-actions, the novel features of its search behaviour, and the native support of ADL and Derived Predicates. ",Marvin: A Heuristic Search Planner with Online Macro-Action Learning " We describe how to convert the heuristic search algorithm A* into an anytime algorithm that finds a sequence of improved solutions and eventually converges to an optimal solution. The approach we adopt uses weighted heuristic search to find an approximate solution quickly, and then continues the weighted search to find improved solutions as well as to improve a bound on the suboptimality of the current solution. When the time available to solve a search problem is limited or uncertain, this creates an anytime heuristic search algorithm that allows a flexible tradeoff between search time and solution quality. We analyze the properties of the resulting Anytime A* algorithm, and consider its performance in three domains; sliding-tile puzzles, STRIPS planning, and multiple sequence alignment. To illustrate the generality of this approach, we also describe how to transform the memory-efficient search algorithm Recursive Best-First Search (RBFS) into an anytime algorithm. ",Anytime Heuristic Search " In this paper we apply computer-aided theorem discovery technique to discover theorems about strongly equivalent logic programs under the answer set semantics. Our discovered theorems capture new classes of strongly equivalent logic programs that can lead to new program simplification rules that preserve strong equivalence. Specifically, with the help of computers, we discovered exact conditions that capture the strong equivalence between a rule and the empty set, between two rules, between two rules and one of the two rules, between two rules and another rule, and between three rules and two of the three rules. 
",Discovering Classes of Strongly Equivalent Logic Programs " The QXORSAT problem is the quantified version of the satisfiability problem XORSAT in which the connective exclusive-or is used instead of the usual or. We study the phase transition associated with random QXORSAT instances. We give a description of this phase transition in the case of one alternation of quantifiers, thus performing an advanced practical and theoretical study on the phase transition of a quantified roblem. ",Phase Transition for Random Quantified XOR-Formulas " The paper presents a new sampling methodology for Bayesian networks that samples only a subset of variables and applies exact inference to the rest. Cutset sampling is a network structure-exploiting application of the Rao-Blackwellisation principle to sampling in Bayesian networks. It improves convergence by exploiting memory-based inference algorithms. It can also be viewed as an anytime approximation of the exact cutset-conditioning algorithm developed by Pearl. Cutset sampling can be implemented efficiently when the sampled variables constitute a loop-cutset of the Bayesian network and, more generally, when the induced width of the networks graph conditioned on the observed sampled variables is bounded by a constant w. We demonstrate empirically the benefit of this scheme on a range of benchmarks. ",Cutset Sampling for Bayesian Networks " Numerous formalisms and dedicated algorithms have been designed in the last decades to model and solve decision making problems. Some formalisms, such as constraint networks, can express ""simple"" decision problems, while others are designed to take into account uncertainties, unfeasible decisions, and utilities. Even in a single formalism, several variants are often proposed to model different types of uncertainty (probability, possibility...) or utility (additive or not). In this article, we introduce an algebraic graphical model that encompasses a large number of such formalisms: (1) we first adapt previous structures from Friedman, Chu and Halpern for representing uncertainty, utility, and expected utility in order to deal with generic forms of sequential decision making; (2) on these structures, we then introduce composite graphical models that express information via variables linked by ""local"" functions, thanks to conditional independence; (3) on these graphical models, we finally define a simple class of queries which can represent various scenarios in terms of observabilities and controllabilities. A natural decision-tree semantics for such queries is completed by an equivalent operational semantics, which induces generic algorithms. The proposed framework, called the Plausibility-Feasibility-Utility (PFU) framework, not only provides a better understanding of the links between existing formalisms, but it also covers yet unpublished frameworks (such as possibilistic influence diagrams) and unifies formalisms such as quantified boolean formulas and influence diagrams. Our backtrack and variable elimination generic algorithms are a first step towards unified algorithms. ","An Algebraic Graphical Model for Decision with Uncertainties, Feasibilities, and Utilities" " Matchmaking arises when supply and demand meet in an electronic marketplace, or when agents search for a web service to perform some task, or even when recruiting agencies match curricula and job profiles. In such open environments, the objective of a matchmaking process is to discover best available offers to a given request. 
We address the problem of matchmaking from a knowledge representation perspective, with a formalization based on Description Logics. We devise Concept Abduction and Concept Contraction as non-monotonic inferences in Description Logics suitable for modeling matchmaking in a logical framework, and prove some related complexity results. We also present reasonable algorithms for semantic matchmaking based on the devised inferences, and prove that they obey some commonsense properties. Finally, we report on the implementation of the proposed matchmaking framework, which has been used both as a mediator in e-marketplaces and for semantic web services discovery. ","Semantic Matchmaking as Non-Monotonic Reasoning: A Description Logic Approach" " Solution-Guided Multi-Point Constructive Search (SGMPCS) is a novel constructive search technique that performs a series of resource-limited tree searches where each search begins either from an empty solution (as in randomized restart) or from a solution that has been encountered during the search. A small number of these ""elite"" solutions is maintained during the search. We introduce the technique and perform three sets of experiments on the job shop scheduling problem. First, a systematic, fully crossed study of SGMPCS is carried out to evaluate the performance impact of various parameter settings. Second, we inquire into the diversity of the elite solution set, showing, contrary to expectations, that a less diverse set leads to stronger performance. Finally, we compare the best parameter setting of SGMPCS from the first two experiments to chronological backtracking, limited discrepancy search, randomized restart, and a sophisticated tabu search algorithm on a set of well-known benchmark problems. Results demonstrate that SGMPCS is significantly better than the other constructive techniques tested, though it lags behind the tabu search. ",Solution-Guided Multi-Point Constructive Search for Job Shop Scheduling " This essay explores the limits of Turing machines concerning the modeling of minds and suggests alternatives to go beyond those limits. ",Are Minds Computable? " This paper compares various optimization methods for fuzzy inference system optimization. The optimization methods compared are the genetic algorithm, particle swarm optimization and simulated annealing. When these techniques were implemented, it was observed that the performance of each technique within the fuzzy inference system classification was context dependent. ",Fuzzy Inference Systems Optimization " We introduce matrices and their blocks into Dung's theory of argumentation frameworks. It is shown that each argumentation framework has a matrix representation, and that the indirect attack relation and indirect defence relation can be characterized by computing the matrix. This provides a powerful mathematical way to determine the ""controversial arguments"" in an argumentation framework. Also, we introduce several kinds of blocks based on the matrix, and the various prudent semantics of argumentation frameworks can all be determined by computing and comparing the matrices and the blocks we have defined. In contrast with the traditional directed-graph method, the matrix method has an excellent advantage: computability (it can even be realized easily on a computer). So, there is a promising perspective in importing matrix theory into the research on argumentation frameworks and its related areas. 
",Handling controversial arguments by matrix " Real-time search methods are suited for tasks in which the agent is interacting with an initially unknown environment in real time. In such simultaneous planning and learning problems, the agent has to select its actions in a limited amount of time, while sensing only a local part of the environment centered at the agents current location. Real-time heuristic search agents select actions using a limited lookahead search and evaluating the frontier states with a heuristic function. Over repeated experiences, they refine heuristic values of states to avoid infinite loops and to converge to better solutions. The wide spread of such settings in autonomous software and hardware agents has led to an explosion of real-time search algorithms over the last two decades. Not only is a potential user confronted with a hodgepodge of algorithms, but he also faces the choice of control parameters they use. In this paper we address both problems. The first contribution is an introduction of a simple three-parameter framework (named LRTS) which extracts the core ideas behind many existing algorithms. We then prove that LRTA*, epsilon-LRTA*, SLA*, and gamma-Trap algorithms are special cases of our framework. Thus, they are unified and extended with additional features. Second, we prove completeness and convergence of any algorithm covered by the LRTS framework. Third, we prove several upper-bounds relating the control parameters and solution quality. Finally, we analyze the influence of the three control parameters empirically in the realistic scalable domains of real-time navigation on initially unknown maps from a commercial role-playing game as well as routing in ad hoc sensor networks. ",Learning in Real-Time Search: A Unifying Framework " This paper introduces the SEQ BIN meta-constraint with a polytime algorithm achieving general- ized arc-consistency according to some properties. SEQ BIN can be used for encoding counting con- straints such as CHANGE, SMOOTH or INCREAS- ING NVALUE. For some of these constraints and some of their variants GAC can be enforced with a time and space complexity linear in the sum of domain sizes, which improves or equals the best known results of the literature. ","A Generalized Arc-Consistency Algorithm for a Class of Counting Constraints: Revised Edition that Incorporates One Correction" " The Taaable projet goal is to create a case-based reasoning system for retrieval and adaptation of cooking recipes. Within this framework, we are discussing the temporal aspects of recipes and the means of representing those in order to adapt their text. ","Quels formalismes temporels pour repr\'esenter des connaissances extraites de textes de recettes de cuisine ?" " Designing component-based constraint solvers is a complex problem. Some components are required, some are optional and there are interdependencies between the components. Because of this, previous approaches to solver design and modification have been ad-hoc and limited. We present a system that transforms a description of the components and the characteristics of the target constraint solver into a constraint problem. Solving this problem yields the description of a valid solver. Our approach represents a significant step towards the automated design and synthesis of constraint solvers that are specialised for individual constraint problem classes or instances. 
",Modelling Constraint Solver Architecture Design as a Constraint Problem " Classification of targets by radar has proved to be notoriously difficult with the best systems still yet to attain sufficiently high levels of performance and reliability. In the current contribution we explore a new design of radar based target recognition, where angular diversity is used in a cognitive manner to attain better performance. Performance is bench- marked against conventional classification schemes. The proposed scheme can easily be extended to cognitive target recognition based on multiple diversity strategies. ",A cognitive diversity framework for radar target classification " It is widely recognized today that the management of imprecision and vagueness will yield more intelligent and realistic knowledge-based applications. Description Logics (DLs) are a family of knowledge representation languages that have gained considerable attention the last decade, mainly due to their decidability and the existence of empirically high performance of reasoning algorithms. In this paper, we extend the well known fuzzy ALC DL to the fuzzy SHIN DL, which extends the fuzzy ALC DL with transitive role axioms (S), inverse roles (I), role hierarchies (H) and number restrictions (N). We illustrate why transitive role axioms are difficult to handle in the presence of fuzzy interpretations and how to handle them properly. Then we extend these results by adding role hierarchies and finally number restrictions. The main contributions of the paper are the decidability proof of the fuzzy DL languages fuzzy-SI and fuzzy-SHIN, as well as decision procedures for the knowledge base satisfiability problem of the fuzzy-SI and fuzzy-SHIN. ",Reasoning with Very Expressive Fuzzy Description Logics " Exact Max-SAT solvers, compared with SAT solvers, apply little inference at each node of the proof tree. Commonly used SAT inference rules like unit propagation produce a simplified formula that preserves satisfiability but, unfortunately, solving the Max-SAT problem for the simplified formula is not equivalent to solving it for the original formula. In this paper, we define a number of original inference rules that, besides being applied efficiently, transform Max-SAT instances into equivalent Max-SAT instances which are easier to solve. The soundness of the rules, that can be seen as refinements of unit resolution adapted to Max-SAT, are proved in a novel and simple way via an integer programming transformation. With the aim of finding out how powerful the inference rules are in practice, we have developed a new Max-SAT solver, called MaxSatz, which incorporates those rules, and performed an experimental investigation. The results provide empirical evidence that MaxSatz is very competitive, at least, on random Max-2SAT, random Max-3SAT, Max-Cut, and Graph 3-coloring instances, as well as on the benchmarks from the Max-SAT Evaluation 2006. ",New Inference Rules for Max-SAT " Reputation mechanisms offer an effective alternative to verification authorities for building trust in electronic markets with moral hazard. Future clients guide their business decisions by considering the feedback from past transactions; if truthfully exposed, cheating behavior is sanctioned and thus becomes irrational. It therefore becomes important to ensure that rational clients have the right incentives to report honestly. 
As an alternative to side-payment schemes that explicitly reward truthful reports, we show that honesty can emerge as a rational behavior when clients have a repeated presence in the market. To this end we describe a mechanism that supports an equilibrium where truthful feedback is obtained. Then we characterize the set of Pareto-optimal equilibria of the mechanism, and derive an upper bound on the percentage of false reports that can be recorded by the mechanism. An important role in the existence of this bound is played by the fact that rational clients can establish a reputation for reporting honestly. ",Obtaining Reliable Feedback for Sanctioning Reputation Mechanisms " We present a new algorithm for probabilistic planning with no observability. Our algorithm, called Probabilistic-FF, extends the heuristic forward-search machinery of Conformant-FF to problems with probabilistic uncertainty about both the initial state and action effects. Specifically, Probabilistic-FF combines Conformant-FF's techniques with a powerful machinery for weighted model counting in (weighted) CNFs, serving to elegantly define both the search space and the heuristic function. Our evaluation of Probabilistic-FF shows its fine scalability in a range of probabilistic domains, constituting a several-orders-of-magnitude improvement over previous results in this area. We use a problematic case to point out the main open issue to be addressed by further research. ","Probabilistic Planning via Heuristic Forward Search and Weighted Model Counting" " Conjunctive queries play an important role as an expressive query language for Description Logics (DLs). Although modern DLs usually provide for transitive roles, conjunctive query answering over DL knowledge bases is only poorly understood if transitive roles are admitted in the query. In this paper, we consider unions of conjunctive queries over knowledge bases formulated in the prominent DL SHIQ and allow transitive roles in both the query and the knowledge base. We show decidability of query answering in this setting and establish two tight complexity bounds: regarding combined complexity, we prove that there is a deterministic algorithm for query answering that needs time single exponential in the size of the KB and double exponential in the size of the query, which is optimal. Regarding data complexity, we prove containment in co-NP. ",Conjunctive Query Answering for the Description Logic SHIQ " Experience in the physical sciences suggests that the only realistic means of understanding complex systems is through the use of mathematical models. Typically, this has come to mean the identification of quantitative models expressed as differential equations. Quantitative modelling works best when the structure of the model (i.e., the form of the equations) is known; and the primary concern is one of estimating the values of the parameters in the model. For complex biological systems, the model-structure is rarely known and the modeler has to deal with both model-identification and parameter-estimation. In this paper we are concerned with providing automated assistance to the first of these problems. Specifically, we examine the identification by machine of the structural relationships between experimentally observed variables. These relationships will be expressed in the form of qualitative abstractions of a quantitative model. Such qualitative models may not only provide clues to the precise quantitative model, but also assist in understanding the essence of that model. 
Our position in this paper is that background knowledge incorporating system modelling principles can be used to constrain effectively the set of good qualitative models. Utilising the model-identification framework provided by Inductive Logic Programming (ILP) we present empirical support for this position using a series of increasingly complex artificial datasets. The results are obtained with qualitative and quantitative data subject to varying amounts of noise and different degrees of sparsity. The results also point to the presence of a set of qualitative states, which we term kernel subsets, that may be necessary for a qualitative model-learner to learn correct models. We demonstrate scalability of the method to biological system modelling by identification of the glycolysis metabolic pathway from data. ",Qualitative System Identification from Imperfect Data " Multi-robot path planning is difficult due to the combinatorial explosion of the search space with every new robot added. Complete search of the combined state-space soon becomes intractable. In this paper we present a novel form of abstraction that allows us to plan much more efficiently. The key to this abstraction is the partitioning of the map into subgraphs of known structure with entry and exit restrictions which we can represent compactly. Planning then becomes a search in the much smaller space of subgraph configurations. Once an abstract plan is found, it can be quickly resolved into a correct (but possibly sub-optimal) concrete plan without the need for further search. We prove that this technique is sound and complete and demonstrate its practical effectiveness on a real map. A contending solution, prioritised planning, is also evaluated and shown to have similar performance albeit at the cost of completeness. The two approaches are not necessarily conflicting; we demonstrate how they can be combined into a single algorithm which outperforms either approach alone. ",Exploiting Subgraph Structure in Multi-Robot Path Planning " Ontologies and automated reasoning are the building blocks of the Semantic Web initiative. Derivation rules can be included in an ontology to define derived concepts, based on base concepts. For example, rules allow to define the extension of a class or property, based on a complex relation between the extensions of the same or other classes and properties. On the other hand, the inclusion of negative information both in the form of negation-as-failure and explicit negative information is also needed to enable various forms of reasoning. In this paper, we extend RDF graphs with weak and strong negation, as well as derivation rules. The ERDF stable model semantics of the extended framework (Extended RDF) is defined, extending RDF(S) semantics. A distinctive feature of our theory, which is based on Partial Logic, is that both truth and falsity extensions of properties and classes are considered, allowing for truth value gaps. Our framework supports both closed-world and open-world reasoning through the explicit representation of the particular closed-world assumptions and the ERDF ontological categories of total properties and total classes. ",Extended RDF as a Semantic Foundation of Rule Markup Languages " We present three new complexity results for classes of planning problems with simple causal graphs. First, we describe a polynomial-time algorithm that uses macros to generate plans for the class 3S of planning problems with binary state variables and acyclic causal graphs. 
This implies that plan generation may be tractable even when a planning problem has an exponentially long minimal solution. We also prove that the problem of plan existence for planning problems with multi-valued variables and chain causal graphs is NP-hard. Finally, we show that plan existence for planning problems with binary state variables and polytree causal graphs is NP-complete. ",The Complexity of Planning Problems With Simple Causal Graphs " We represent planning as a set of loosely coupled network flow problems, where each network corresponds to one of the state variables in the planning domain. The network nodes correspond to the state variable values and the network arcs correspond to the value transitions. The planning problem is to find a path (a sequence of actions) in each network such that, when merged, they constitute a feasible plan. In this paper we present a number of integer programming formulations that model these loosely coupled networks with varying degrees of flexibility. Since merging may introduce exponentially many ordering constraints we implement a so-called branch-and-cut algorithm, in which these constraints are dynamically generated and added to the formulation when needed. Our results are very promising, they improve upon previous planning as integer programming approaches and lay the foundation for integer programming approaches for cost optimal planning. ","Loosely Coupled Formulations for Automated Planning: An Integer Programming Perspective" " In a facility with front room and back room operations, it is useful to switch workers between the rooms in order to cope with changing customer demand. Assuming stochastic customer arrival and service times, we seek a policy for switching workers such that the expected customer waiting time is minimized while the expected back room staffing is sufficient to perform all work. Three novel constraint programming models and several shaving procedures for these models are presented. Experimental results show that a model based on closed-form expressions together with a combination of shaving procedures is the most efficient. This model is able to find and prove optimal solutions for many problem instances within a reasonable run-time. Previously, the only available approach was a heuristic algorithm. Furthermore, a hybrid method combining the heuristic and the best constraint programming method is shown to perform as well as the heuristic in terms of solution quality over time, while achieving the same performance in terms of proving optimality as the pure constraint programming model. This is the first work of which we are aware that solves such queueing-based problems with constraint programming. ",A Constraint Programming Approach for Solving a Queueing Control Problem " Decision-theoretic planning is a popular approach to sequential decision making problems, because it treats uncertainty in sensing and acting in a principled way. In single-agent frameworks like MDPs and POMDPs, planning can be carried out by resorting to Q-value functions: an optimal Q-value function Q* is computed in a recursive manner by dynamic programming, and then an optimal policy is extracted from Q*. In this paper we study whether similar Q-value functions can be defined for decentralized POMDP models (Dec-POMDPs), and how policies can be extracted from such value functions. 
We define two forms of the optimal Q-value function for Dec-POMDPs: one that gives a normative description as the Q-value function of an optimal pure joint policy and another one that is sequentially rational and thus gives a recipe for computation. This computation, however, is infeasible for all but the smallest problems. Therefore, we analyze various approximate Q-value functions that allow for efficient computation. We describe how they relate, and we prove that they all provide an upper bound to the optimal Q-value function Q*. Finally, unifying some previous approaches for solving Dec-POMDPs, we describe a family of algorithms for extracting policies from such Q-value functions, and perform an experimental evaluation on existing test problems, including a new firefighting benchmark problem. ",Optimal and Approximate Q-value Functions for Decentralized POMDPs " Multi-agent planning in stochastic environments can be framed formally as a decentralized Markov decision problem. Many real-life distributed problems that arise in manufacturing, multi-robot coordination and information gathering scenarios can be formalized using this framework. However, finding the optimal solution in the general case is hard, limiting the applicability of recently developed algorithms. This paper provides a practical approach for solving decentralized control problems when communication among the decision makers is possible, but costly. We develop the notion of communication-based mechanism that allows us to decompose a decentralized MDP into multiple single-agent problems. In this framework, referred to as decentralized semi-Markov decision process with direct communication (Dec-SMDP-Com), agents operate separately between communications. We show that finding an optimal mechanism is equivalent to solving optimally a Dec-SMDP-Com. We also provide a heuristic search algorithm that converges on the optimal decomposition. Restricting the decomposition to some specific types of local behaviors reduces significantly the complexity of planning. In particular, we present a polynomial-time algorithm for the case in which individual agents perform goal-oriented behaviors between communications. The paper concludes with an additional tractable algorithm that enables the introduction of human knowledge, thereby reducing the overall problem to finding the best time to communicate. Empirical results show that these approaches provide good approximate solutions. ",Communication-Based Decomposition Mechanisms for Decentralized MDPs " Informally, a set of abstractions of a state space S is additive if the distance between any two states in S is always greater than or equal to the sum of the corresponding distances in the abstract spaces. The first known additive abstractions, called disjoint pattern databases, were experimentally demonstrated to produce state of the art performance on certain state spaces. However, previous applications were restricted to state spaces with special properties, which precludes disjoint pattern databases from being defined for several commonly used testbeds, such as Rubiks Cube, TopSpin and the Pancake puzzle. In this paper we give a general definition of additive abstractions that can be applied to any state space and prove that heuristics based on additive abstractions are consistent as well as admissible. 
We use this new definition to create additive abstractions for these testbeds and show experimentally that well chosen additive abstractions can reduce search time substantially for the (18,4)-TopSpin puzzle and by three orders of magnitude over state of the art methods for the 17-Pancake puzzle. We also derive a way of testing if the heuristic value returned by additive abstractions is provably too low and show that the use of this test can reduce search time for the 15-puzzle and TopSpin by roughly a factor of two. ",A General Theory of Additive State Space Abstractions " Markov decision processes capture sequential decision making under uncertainty, where an agent must choose actions so as to optimize long term reward. The paper studies efficient reasoning mechanisms for Relational Markov Decision Processes (RMDP) where world states have an internal relational structure that can be naturally described in terms of objects and relations among them. Two contributions are presented. First, the paper develops First Order Decision Diagrams (FODD), a new compact representation for functions over relational structures, together with a set of operators to combine FODDs, and novel reduction techniques to keep the representation small. Second, the paper shows how FODDs can be used to develop solutions for RMDPs, where reasoning is performed at the abstract level and the resulting optimal policy is independent of domain size (number of objects) or instantiation. In particular, a variant of the value iteration algorithm is developed by using special operations over FODDs, and the algorithm is shown to converge to the optimal policy. ",First Order Decision Diagrams for Relational MDPs " Resolution is the rule of inference at the basis of most procedures for automated reasoning. In these procedures, the input formula is first translated into an equisatisfiable formula in conjunctive normal form (CNF) and then represented as a set of clauses. Deduction starts by inferring new clauses by resolution, and goes on until the empty clause is generated or satisfiability of the set of clauses is proven, e.g., because no new clauses can be generated. In this paper, we restrict our attention to the problem of evaluating Quantified Boolean Formulas (QBFs). In this setting, the above outlined deduction process is known to be sound and complete if given a formula in CNF and if a form of resolution, called Q-resolution, is used. We introduce Q-resolution on terms, to be used for formulas in disjunctive normal form. We show that the computation performed by most of the available procedures for QBFs --based on the Davis-Logemann-Loveland procedure (DLL) for propositional satisfiability-- corresponds to a tree in which Q-resolution on terms and clauses alternate. This poses the theoretical bases for the introduction of learning, corresponding to recording Q-resolution formulas associated with the nodes of the tree. We discuss the problems related to the introduction of learning in DLL based procedures, and present solutions extending state-of-the-art proposals coming from the literature on propositional satisfiability. Finally, we show that our DLL based solver extended with learning, performs significantly better on benchmarks used in the 2003 QBF solvers comparative evaluation. ","Clause/Term Resolution and Learning in the Evaluation of Quantified Boolean Formulas" " The theoretical transition from the graphs of production systems to the bipartite graphs of the MIVAR nets is shown. 
Examples of the implementation of the MIVAR nets in the formalisms of matrixes and graphs are given. The linear computational complexity of algorithms for automated building of objects and rules of the MIVAR nets is theoretically proved. On the basis of the MIVAR nets the UDAV software complex is developed, handling more than 1.17 million objects and more than 3.5 million rules on ordinary computers. The results of experiments that confirm a linear computational complexity of the MIVAR method of information processing are given. Keywords: MIVAR, MIVAR net, logical inference, computational complexity, artificial intelligence, intelligent systems, expert systems, General Problem Solver. ","MIVAR: Transition from Productions to Bipartite Graphs MIVAR Nets and Practical Realization of Automated Constructor of Algorithms Handling More than Three Million Production Rules" " Description logic programs (dl-programs) under the answer set semantics formulated by Eiter {\em et al.} have been considered as a prominent formalism for integrating rules and ontology knowledge bases. A question of interest has been whether dl-programs can be captured in a general formalism of nonmonotonic logic. In this paper, we study the possibility of embedding dl-programs into default logic. We show that dl-programs under the strong and weak answer set semantics can be embedded in default logic by combining two translations, one of which eliminates the constraint operator from nonmonotonic dl-atoms and the other translates a dl-program into a default theory. For dl-programs without nonmonotonic dl-atoms but with the negation-as-failure operator, our embedding is polynomial, faithful, and modular. In addition, our default logic encoding can be extended in a simple way to capture recently proposed weakly well-supported answer set semantics, for arbitrary dl-programs. These results reinforce the argument that default logic can serve as a fruitful foundation for query-based approaches to integrating ontology and rules. With its simple syntax and intuitive semantics, plus available computational results, default logic can be considered an attractive approach to integration of ontology and rules. ",Embedding Description Logic Programs into Default Logic " Electronic government (e-government) has been one of the most active areas of ontology development during the past six years. In e-government, ontologies are being used to describe and specify e-government services (e-services) because they enable easy composition, matching, mapping and merging of various e-government services. More importantly, they also facilitate the semantic integration and interoperability of e-government services. However, it is still unclear in the current literature how an existing ontology building methodology can be applied to develop semantic ontology models in a government service domain. In this paper the Uschold and King ontology building methodology is applied to develop semantic ontology models in a government service domain. Firstly, the Uschold and King methodology is presented, discussed and applied to build a government domain ontology. Secondly, the domain ontology is evaluated for semantic consistency using its semi-formal representation in Description Logic. Thirdly, an alignment of the domain ontology with the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) upper level ontology is drawn to allow its wider visibility and facilitate its integration with existing metadata standard. 
Finally, the domain ontology is formally written in Web Ontology Language (OWL) to enable its automatic processing by computers. The study aims to provide direction for the application of existing ontology building methodologies in the Semantic Web development processes of e-government domain specific ontology models; which would enable their repeatability in other e-government projects and strengthen the adoption of semantic technologies in e-government. ","Semantic-Driven e-Government: Application of Uschold and King Ontology Building Methodology for Semantic Ontology Models Development" " It has been widely observed that there is no single ""dominant"" SAT solver; instead, different solvers perform best on different instances. Rather than following the traditional approach of choosing the best solver for a given class of instances, we advocate making this decision online on a per-instance basis. Building on previous work, we describe SATzilla, an automated approach for constructing per-instance algorithm portfolios for SAT that use so-called empirical hardness models to choose among their constituent solvers. This approach takes as input a distribution of problem instances and a set of component solvers, and constructs a portfolio optimizing a given objective function (such as mean runtime, percent of instances solved, or score in a competition). The excellent performance of SATzilla was independently verified in the 2007 SAT Competition, where our SATzilla07 solvers won three gold, one silver and one bronze medal. In this article, we go well beyond SATzilla07 by making the portfolio construction scalable and completely automated, and improving it by integrating local search solvers as candidate solvers, by predicting performance score instead of runtime, and by using hierarchical hardness models that take into account different types of SAT instances. We demonstrate the effectiveness of these new techniques in extensive experimental results on data sets including instances from the most recent SAT competition. ",SATzilla: Portfolio-based Algorithm Selection for SAT " This paper shows that maintaining logical consistency of an iris recognition system is a matter of finding a suitable partitioning of the input space in enrollable and unenrollable pairs by negotiating the user comfort and the safety of the biometric system. In other words, consistent enrollment is mandatory in order to preserve system consistency. A fuzzy 3-valued disambiguated model of iris recognition is proposed and analyzed in terms of completeness, consistency, user comfort and biometric safety. It is also shown here that the fuzzy 3-valued model of iris recognition is hosted by an 8-valued Boolean algebra of modulo 8 integers that represents the computational formalization in which a biometric system (a software agent) can achieve the artificial understanding of iris recognition in a logically consistent manner. ",8-Valent Fuzzy Logic for Iris Recognition and Biometry " In voting contexts, some new candidates may show up in the course of the process. In this case, we may want to determine which of the initial candidates are possible winners, given that a fixed number $k$ of new candidates will be added. We give a computational study of this problem, focusing on scoring rules, and we provide a formal comparison with related problems such as control via adding candidates or cloning. ","New Candidates Welcome! 
Possible Winners with respect to the Addition of New Candidates" " Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment, so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that, if provided with the possibility to modify their utility functions, agents will not choose to do so, under some usual assumptions. ",Model-based Utility Functions " We show that estimating the complexity (mean and distribution) of the instances of a fixed-size Constraint Satisfaction Problem (CSP) can be very hard. We deal with the two main aspects of the problem: defining a measure of complexity and generating random unbiased instances. For the first problem, we rely on a general framework and a measure of complexity we presented at CISSE08. For the generation problem, we restrict our analysis to the Sudoku example and we provide a solution that also explains why it is so difficult. ",Unbiased Statistics of a CSP - A Controlled-Bias Generator " Trying to be effective (no matter who exactly and in what field), a person faces a problem which inevitably destroys all our attempts to easily reach a desired goal. The problem is the existence of some insuperable barriers for our mind, in other words barriers for the principles of thinking. They are our clue and the main reason for this research. Here we investigate these barriers and their features, exposing the nature of the mental process. We start from special structures which reflect the ways to define relations between objects. Then we come to realize what material our mind uses to build thoughts, to make conclusions, to understand, to form reasoning, etc. This can be called mental dynamics. After this, the nature of mental barriers at the required level of abstraction, as well as the ways to pass through them, becomes clear. We begin to understand why thinking flows in such a way, with such specifics and with such limitations as we can observe in reality. This can help us to be more optimal. At the final step we start to understand what mathematical models can be applied to such a picture. We start to express our thoughts in the language of mathematics, developing an apparatus for our Spatial Theory of Mind, suitable to represent the processes and infrastructure of thinking. We use abstract algebra and stay invariant in relation to the nature of objects. ",A Model of Spatial Thinking for Computational Intelligence " In this paper, we investigate the problem of mining numerical data in the framework of Formal Concept Analysis. 
The usual way is to use a scaling procedure --transforming numerical attributes into binary ones-- leading either to a loss of information or of efficiency, in particular w.r.t. the volume of extracted patterns. By contrast, we propose to directly work on numerical data in a more precise and efficient way, and we prove it. For that, the notions of closed patterns, generators and equivalent classes are revisited in the numerical context. Moreover, two original algorithms are proposed and used in an evaluation involving real-world data, showing the predominance of the present approach. ",Revisiting Numerical Pattern Mining with Formal Concept Analysis " We identify principles characterizing Solomonoff Induction by demands on an agent's external behaviour. Key concepts are rationality, computability, indifference and time consistency. Furthermore, we discuss extensions to the full AI case to derive AIXI. ",Principles of Solomonoff Induction and AIXI " The use of patterns in predictive models is a topic that has received a lot of attention in recent years. Pattern mining can help to obtain models for structured domains, such as graphs and sequences, and has been proposed as a means to obtain more accurate and more interpretable models. Despite the large amount of publications devoted to this topic, we believe however that an overview of what has been accomplished in this area is missing. This paper presents our perspective on this evolving area. We identify the principles of pattern mining that are important when mining patterns for models and provide an overview of pattern-based classification methods. We categorize these methods along the following dimensions: (1) whether they post-process a pre-computed set of patterns or iteratively execute pattern mining algorithms; (2) whether they select patterns model-independently or whether the pattern selection is guided by a model. We summarize the results that have been obtained for each of these methods. ",Pattern-Based Classification: A Unifying Perspective " Nowadays, e-government has emerged as a government policy to improve the quality and efficiency of public administrations. By exploiting the potential of new information and communication technologies, government agencies are providing a wide spectrum of online services. These services are composed of several web services that comply with well defined processes. One of the big challenges is the need to optimize the composition of the elementary web services. In this paper, we present a solution for optimizing the computation effort in web service composition. Our method is based on Graph Theory. We model the semantic relationship between the involved web services through a directed graph. Then, we compute all shortest paths using for the first time, an extended version of the Floyd-Warshall algorithm. ",Graph based E-Government web service composition " With the fast growth of the Internet, more and more information is available on the Web. The Semantic Web has many features which cannot be handled by using the traditional search engines. It extracts metadata for each discovered Web documents in RDF or OWL formats, and computes relations between documents. We proposed a hybrid indexing and ranking technique for the Semantic Web which finds relevant documents and computes the similarity among a set of documents. First, it returns with the most related document from the repository of Semantic Web Documents (SWDs) by using a modified version of the ObjectRank technique. 
Then, it creates a sub-graph for the most related SWDs. Finally, it returns the hubs and authorities of these documents by using the HITS algorithm. Our technique increases the quality of the results and decreases the execution time of processing the user's query. ",An Enhanced Indexing And Ranking Technique On The Semantic Web " This paper presents an algorithm for learning a highly redundant inverse model in continuous and non-preset environments. Our Socially Guided Intrinsic Motivation by Demonstrations (SGIM-D) algorithm combines the advantages of both social learning and intrinsic motivation, to specialise in a wide range of skills, while lessening its dependence on the teacher. SGIM-D is evaluated on a fishing skill learning experiment. ","Constraining the Size Growth of the Task Space with Socially Guided Intrinsic Motivation using Demonstrations" " We consider an extension of the setting of label ranking, in which the learner is allowed to make predictions in the form of partial instead of total orders. Predictions of that kind are interpreted as a partial abstention: If the learner is not sufficiently certain regarding the relative order of two alternatives, it may abstain from this decision and instead declare these alternatives as being incomparable. We propose a new method for learning to predict partial orders that improves on an existing approach, both theoretically and empirically. Our method is based on the idea of thresholding the probabilities of pairwise preferences between labels as induced by a predicted (parameterized) probability distribution on the set of all rankings. ","Label Ranking with Abstention: Predicting Partial Orders by Thresholding Probability Distributions (Extended Abstract)" " The covering model provides a general framework for granular computing in that overlapping among granules is almost indispensable. For any given covering, both the intersection and the union of covering blocks containing an element are exploited as granules to form granular worlds at different abstraction levels, respectively, and transformations among these different granular worlds are also discussed. As an application of the presented multi-granular perspective on covering, relational interpretation and axiomatization of four types of covering-based rough upper approximation operators are investigated, which can be dually applied to lower ones. ",Multi-granular Perspectives on Covering " Slow Feature Analysis (SFA) extracts features representing the underlying causes of changes within a temporally coherent high-dimensional raw sensory input signal. Our novel incremental version of SFA (IncSFA) combines incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, IncSFA adapts along with non-stationary environments, is amenable to episodic training, is not corrupted by outliers, and is covariance-free. These properties make IncSFA a generally useful unsupervised preprocessor for autonomous learning agents and robots. In IncSFA, the CCIPCA and MCA updates take the form of Hebbian and anti-Hebbian updating, extending the biological plausibility of SFA. In both single node and deep network versions, IncSFA learns to encode its input streams (such as high-dimensional video) by informative slow features representing meaningful abstract environmental properties. It can handle cases where batch SFA fails. 
","Incremental Slow Feature Analysis: Adaptive and Episodic Learning from High-Dimensional Input Streams" " Many performance metrics have been introduced for the evaluation of classification performance, with different origins and niches of application: accuracy, macro-accuracy, area under the ROC curve, the ROC convex hull, the absolute error, and the Brier score (with its decomposition into refinement and calibration). One way of understanding the relation among some of these metrics is the use of variable operating conditions (either in the form of misclassification costs or class proportions). Thus, a metric may correspond to some expected loss over a range of operating conditions. One dimension for the analysis has been precisely the distribution we take for this range of operating conditions, leading to some important connections in the area of proper scoring rules. However, we show that there is another dimension which has not received attention in the analysis of performance metrics. This new dimension is given by the decision rule, which is typically implemented as a threshold choice method when using scoring models. In this paper, we explore many old and new threshold choice methods: fixed, score-uniform, score-driven, rate-driven and optimal, among others. By calculating the loss of these methods for a uniform range of operating conditions we get the 0-1 loss, the absolute error, the Brier score (mean squared error), the AUC and the refinement loss respectively. This provides a comprehensive view of performance metrics as well as a systematic approach to loss minimisation, namely: take a model, apply several threshold choice methods consistent with the information which is (and will be) available about the operating condition, and compare their expected losses. In order to assist in this procedure we also derive several connections between the aforementioned performance metrics, and we highlight the role of calibration in choosing the threshold choice method. ",Threshold Choice Methods: the Missing Link " Probabilistic Logic Programming (PLP), exemplified by Sato and Kameya's PRISM, Poole's ICL, Raedt et al's ProbLog and Vennekens et al's LPAD, is aimed at combining statistical and logical knowledge representation and inference. A key characteristic of PLP frameworks is that they are conservative extensions to non-probabilistic logic programs which have been widely used for knowledge representation. PLP frameworks extend traditional logic programming semantics to a distribution semantics, where the semantics of a probabilistic logic program is given in terms of a distribution over possible models of the program. However, the inference techniques used in these works rely on enumerating sets of explanations for a query answer. Consequently, these languages permit very limited use of random variables with continuous distributions. In this paper, we present a symbolic inference procedure that uses constraints and represents sets of explanations without enumeration. This permits us to reason over PLPs with Gaussian or Gamma-distributed random variables (in addition to discrete-valued random variables) and linear equality constraints over reals. We develop the inference procedure in the context of PRISM; however the procedure's core ideas can be easily applied to other PLP languages as well. An interesting aspect of our inference procedure is that PRISM's query evaluation process becomes a special case in the absence of any continuous random variables in the program. 
The symbolic inference procedure enables us to reason over complex probabilistic models such as Kalman filters and a large subclass of Hybrid Bayesian networks that were hitherto not possible in PLP frameworks. (To appear in Theory and Practice of Logic Programming). ","Inference in Probabilistic Logic Programs with Continuous Random Variables" " We consider the task of performing probabilistic inference with probabilistic logical models. Many algorithms for approximate inference with such models are based on sampling. From a logic programming perspective, sampling boils down to repeatedly calling the same queries on a knowledge base composed of a static part and a dynamic part. The larger the static part, the more redundancy there is in these repeated calls. This is problematic since inefficient sampling yields poor approximations. We show how to apply logic program specialization to make sampling-based inference more efficient. We develop an algorithm that specializes the definitions of the query predicates with respect to the static part of the knowledge base. In experiments on real-world data we obtain speedups of up to an order of magnitude, and these speedups grow with the data-size. ","Improving the Efficiency of Approximate Inference for Probabilistic Logical Models by means of Program Specialization" " In this paper, the continuity and strong continuity in domain-free information algebras and labeled information algebras are introduced respectively. A more general concept of continuous function which is defined between two domain-free continuous information algebras is presented. It is shown that, with the operations combination and focusing, the set of all continuous functions between two domain-free s-continuous information algebras forms a new s-continuous information algebra. By studying the relationship between domain-free information algebras and labeled information algebras, it is demonstrated that they do correspond to each other on s-compactness. ",Continuity in Information Algebras " We study propagation of the RegularGcc global constraint. This ensures that each row of a matrix of decision variables satisfies a Regular constraint, and each column satisfies a Gcc constraint. On the negative side, we prove that propagation is NP-hard even under some strong restrictions (e.g. just 3 values, just 4 states in the automaton, or just 5 columns to the matrix). On the positive side, we identify two cases where propagation is fixed parameter tractable. In addition, we show how to improve propagation over a simple decomposition into separate Regular and Gcc constraints by identifying some necessary but insufficient conditions for a solution. We enforce these conditions with some additional weighted row automata. Experimental results demonstrate the potential of these methods on some standard benchmark problems. ",The RegularGcc Matrix Constraint " Fuzzy rule based models have a capability to approximate any continuous function to any degree of accuracy on a compact domain. The majority of FLC design process relies on heuristic knowledge of experience operators. In order to make the design process automatic we present a genetic approach to learn fuzzy rules as well as membership function parameters. Moreover, several statistical information criteria such as the Akaike information criterion (AIC), the Bhansali-Downham information criterion (BDIC), and the Schwarz-Rissanen information criterion (SRIC) are used to construct optimal fuzzy models by reducing fuzzy rules. 
A genetic scheme is used to design a Takagi-Sugeno-Kang (TSK) model for the identification of the antecedent rule parameters and the identification of the consequent parameters. Computer simulations are presented confirming the performance of the constructed fuzzy logic controller. ","Optimal Fuzzy Model Construction with Statistical Information using Genetic Algorithm" " We mathematically model Ignacio Matte Blanco's principles of symmetric and asymmetric being through use of an ultrametric topology. We use for this the highly regarded 1975 book of this Chilean psychiatrist and psychoanalyst (born 1908, died 1995). Such an ultrametric model corresponds to hierarchical clustering in the empirical data, e.g. text. We show how an ultrametric topology can be used as a mathematical model for the structure of the logic that reflects or expresses Matte Blanco's symmetric being, and hence of the reasoning and thought processes involved in conscious reasoning or in reasoning that is lacking, perhaps entirely, in consciousness or awareness of itself. In a companion paper we study how symmetric (in the sense of Matte Blanco's) reasoning can be demarcated in a context of symmetric and asymmetric reasoning provided by narrative text. ","Ultrametric Model of Mind, I: Review" " This paper is the continuation of our research work on the linguistic truth-valued concept lattice. In order to provide a mathematical tool for mining tacit knowledge, we establish a concrete model of a 6-ary linguistic truth-valued concept lattice and introduce a mining algorithm through structure consistency. Specifically, we utilize the attributes to depict knowledge, propose the 6-ary linguistic truth-valued attribute extended context and congener context to characterize tacit knowledge, and research the necessary and sufficient conditions of forming tacit knowledge. We respectively give the algorithms for generating the linguistic truth-valued congener context and constructing the linguistic truth-valued concept lattice. ","Tacit knowledge mining algorithm based on linguistic truth-valued concept lattice" " Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate Hash-Distributed A* (HDA*), a simple approach to parallel best-first search that asynchronously distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in an optimal sequential version of the Fast Downward planner, as well as a 24-puzzle solver. The scaling behavior of HDA* is evaluated experimentally on a shared memory, multicore machine with 8 cores, a cluster of commodity machines using up to 64 cores, and large-scale high-performance clusters, using up to 2400 processors. We show that this approach scales well, allowing the effective utilization of large amounts of distributed memory to optimally solve problems which require terabytes of RAM. We also compare HDA* to Transposition-table Driven Scheduling (TDS), a hash-based parallelization of IDA*, and show that, in planning, HDA* significantly outperforms TDS. A simple hybrid which combines HDA* and TDS to exploit the strengths of both algorithms is proposed and evaluated. ","Evaluation of a Simple, Scalable, Parallel Best-First Search Strategy" " We review some existing methods for the computation of first order moments on junction trees using the Shafer-Shenoy algorithm. 
First, we consider the problem of first order moments computation as a vertices problem in junction trees. In this way, the problem is solved using memory space of the order of the junction tree edge-set cardinality. After that, we consider two algorithms, the Lauritzen-Nilsson algorithm and the Mau\'a et al. algorithm, which compute the first order moments as a normalization problem in the junction tree, using memory space of the order of the junction tree leaf-set cardinality. ",The computation of first order moments on junction trees " We present a technique for the animation of a 3D kinematic tongue model, one component of the talking head of an acoustic-visual (AV) speech synthesizer. The skeletal animation approach is adapted to make use of a deformable rig controlled by tongue motion capture data obtained with electromagnetic articulography (EMA), while the tongue surface is extracted from volumetric magnetic resonance imaging (MRI) data. Initial results are shown and future work outlined. ","Progress in animation of an EMA-controlled tongue model for acoustic-visual speech synthesis" " This paper draws on diverse areas of computer science to develop a unified view of computation: (1) Optimization in operations research, where a numerical objective function is maximized under constraints, is generalized from the numerical total order to a non-numerical partial order that can be interpreted in terms of information. (2) Relations are generalized so that there are relations of which the constituent tuples have numerical indexes, whereas in other relations these indexes are variables. The distinction is essential in our definition of constraint satisfaction problems. (3) Constraint satisfaction problems are formulated in terms of the semantics of conjunctions of atomic formulas of predicate logic. (4) Approximation structures, which are available for several important domains, are applied to solutions of constraint satisfaction problems. As an application we treat constraint satisfaction problems over the reals. These cover a large part of numerical analysis, most significantly nonlinear equations and inequalities. The chaotic algorithm analyzed in the paper combines the efficiency of floating-point computation with the correctness guarantees arising from our logico-mathematical model of constraint-satisfaction problems. ",Constraint Propagation as Information Maximization " We built a multiagent simulation of urban traffic to model both ordinary traffic and emergency or crisis mode traffic. This simulation first builds a modeled road network based on detailed geographical information. On this network, the simulation creates two populations of agents: the Transporters and the Mobiles. Transporters embody the roads themselves; they are utilitarian and meant to handle the low-level realism of the simulation. Mobile agents embody the vehicles that circulate on the network. They have one or several destinations they try to reach, initially using their beliefs about the structure of the network (length of the edges, speed limits, number of lanes etc.). Nonetheless, when confronted with a dynamic, emergence-prone environment (other vehicles, unexpectedly closed ways or lanes, traffic jams etc.), the rather reactive agent will activate more cognitive modules to adapt its beliefs, desires and intentions. It may change its destination(s), change the tactics used to reach the destination (favoring less used roads, following other agents, using general headings), etc. 
We describe our current validation of our model and the next planned improvements, both in validation and in functionalities. ",A multiagent urban traffic simulation " After more than sixty years, Shannon's research [1-3] continues to raise fundamental questions, such as the one formulated by Luce [4,5], which is still unanswered: ""Why is information theory not very applicable to psychological problems, despite apparent similarities of concepts?"" On this topic, Pinker [6], one of the foremost defenders of the computational theory of mind [6], has argued that thought is simply a type of computation, and that the gap between human cognition and computational models may be illusory. In this context, in his latest book, titled Thinking Fast and Slow [8], Kahneman [7,8] provides further theoretical interpretation by differentiating the two assumed systems of the cognitive functioning of the human mind. He calls them intuition (system 1) determined to be an associative (automatic, fast and perceptual) machine, and reasoning (system 2) required to be voluntary and to operate logical- deductively. In this paper, we propose an ansatz inspired by Ausubel's learning theory for investigating, from the constructivist perspective [9-12], information processing in the working memory of cognizers. Specifically, a thought experiment is performed utilizing the mind of a dual-natured creature known as Maxwell's demon: a tiny ""man-machine"" solely equipped with the characteristics of system 1, which prevents it from reasoning. The calculation presented here shows that [...]. This result indicates that when the system 2 is shut down, both an intelligent being, as well as a binary machine, incur the same energy cost per unit of information processed, which mathematically proves the computational attribute of the system 1, as Kahneman [7,8] theorized. This finding links information theory to human psychological features and opens a new path toward the conception of a multi-bit reasoning machine. ",The thermodynamic cost of fast thought " The holistic approach to sustainable urban planning implies using different models in an integrated way that is capable of simulating the urban system. As the interconnection of such models is not a trivial task, one of the key elements that may be applied is the description of the urban geometric properties in an ""interoperable"" way. Focusing on air quality as one of the most pronounced urban problems, the geometric aspects of a city may be described by objects such as those defined in CityGML, so that an appropriate air quality model can be applied for estimating the quality of the urban air on the basis of atmospheric flow and chemistry equations. In this paper we first present theoretical background and motivations for the interconnection of 3D city models and other models related to sustainable development and urban planning. Then we present a practical experiment based on the interconnection of CityGML with an air quality model. Our approach is based on the creation of an ontology of air quality models and on the extension of an ontology of urban planning process (OUPP) that acts as an ontology mediator. ",Ontologies for the Integration of Air Quality Models and 3D City Models " In this paper, we will argue that if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded into the physical system, and how the organism interacts with the real world. 
While embodiment has often been used in its trivial meaning, i.e. 'intelligence requires a body', the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. A number of case studies are presented to illustrate the concept. These involve animals and robots and are concentrated around locomotion, grasping, and visual perception. A theoretical scheme that can be used to embed the diverse case studies will be presented. Finally, we will establish a link between the low-level sensory-motor processes and cognition. We will present an embodied view on categorization, and propose the concepts of 'body schema' and 'forward models' as a natural extension of the embodied approach toward first representations. ","The implications of embodiment for behavior and cognition: animal and robotic case studies" " This paper analyses the influence of including agents of different degrees of intelligence in a multiagent system. The goal is to better understand how we can develop intelligence tests that can evaluate social intelligence. We analyse several reinforcement learning algorithms in several contexts of cooperation and competition. Our experimental setting is inspired by the recently developed Darwin-Wallace distribution. ","On the influence of intelligence in (social) intelligence testing environments" " Many methods have been developed to secure the network infrastructure and communication over the Internet. Intrusion detection is a relatively new addition to such techniques. Intrusion detection systems (IDS) are used to find out if someone has intruded into, or is trying to intrude into, the network. One big problem is the amount of intrusions, which is increasing day by day. We need to gather network attack information using an IDS and then analyse its effect. Because IDSs that are solely signature based cannot detect every new intrusion, it is important to introduce artificial intelligence (AI) methods and techniques into IDS. The introduction of AI also raises the need for normalization of intrusions. This work is focused on the classification of AI based IDS techniques, which will help to better design intrusion detection systems in the future. We have also proposed a support vector machine based IDS to detect the Smurf attack with reliable accuracy. ",Classification of artificial intelligence ids for smurf attack " Today, university timetabling problems occur annually and they are often hard and time consuming to solve. This paper describes a Hyper Heuristic (HH) method based on Great Deluge (GD) and its variants for solving large, highly constrained timetabling problems from different domains. Generally, in a hyper heuristic framework, there are two main stages: heuristic selection and move acceptance. This paper focuses on the latter stage to develop the Hyper Heuristic (HH) framework. The main contribution of this paper is that Great Deluge (GD) and its variants, Flex Deluge (FD), Non-linear Great Deluge (NLGD) and Extended Great Deluge (EGD), are used as the move acceptance method in HH, combined with Reinforcement Learning (RL). These HH methods are tested on benchmark exam timetabling problems, and the best results and a comparative analysis are reported. ","Hyper heuristic based on great deluge and its variants for exam timetabling problem" " Data mining techniques play a vital role in extracting required knowledge and finding unsuspected information to make strategic decisions in a novel way that is understandable by domain experts.
A generalized framework is proposed that considers non-domain experts during the mining process, enabling better understanding, better decision making and better discovery of new patterns when selecting suitable data mining techniques based on the user profile, by means of intelligent agents. KEYWORDS: Data Mining Techniques, Intelligent Agents, User Profile, Multidimensional Visualization, Knowledge Discovery. ","A framework: Cluster detection and multidimensional visualization of automated data mining using intelligent agents" " The First-Order Variable Elimination (FOVE) algorithm allows exact inference to be applied directly to probabilistic relational models, and has proven to be vastly superior to the application of standard inference methods on a grounded propositional model. Still, FOVE operators can be applied under restricted conditions, often forcing one to resort to propositional inference. This paper aims to extend the applicability of FOVE by providing two new model conversion operators: the first and the primary is joint formula conversion and the second is just-different counting conversion. These new operations allow efficient inference methods to be applied directly on relational models, where no existing efficient method could be applied hitherto. In addition, aided by these capabilities, we show how to adapt FOVE to provide exact solutions to Maximum Expected Utility (MEU) queries over relational models for decision under uncertainty. Experimental evaluations show our algorithms to provide significant speedup over the alternatives. ",Extended Lifted Inference with Joint Formulas " Bayes-optimal behavior, while well-defined, is often difficult to achieve. Recent advances in the use of Monte-Carlo tree search (MCTS) have shown that it is possible to act near-optimally in Markov Decision Processes (MDPs) with very large or infinite state spaces. Bayes-optimal behavior in an unknown MDP is equivalent to optimal behavior in the known belief-space MDP, although the size of this belief-space MDP grows exponentially with the amount of history retained, and is potentially infinite. We show how an agent can use one particular MCTS algorithm, Forward Search Sparse Sampling (FSSS), in an efficient way to act nearly Bayes-optimally for all but a polynomial number of steps, assuming that FSSS can be used to act efficiently in any possible underlying MDP. ","Learning is planning: near Bayes-optimal reinforcement learning via Monte-Carlo tree search" " Hierarchical problem abstraction, when applicable, may offer exponential reductions in computational complexity. Previous work on coarse-to-fine dynamic programming (CFDP) has demonstrated this possibility using state abstraction to speed up the Viterbi algorithm. In this paper, we show how to apply temporal abstraction to the Viterbi problem. Our algorithm uses bounds derived from analysis of coarse timescales to prune large parts of the state trellis at finer timescales. We demonstrate improvements of several orders of magnitude over the standard Viterbi algorithm, as well as significant speedups over CFDP, for problems whose state variables evolve at widely differing rates. ",A temporally abstracted Viterbi algorithm " We propose a method called EDML for learning MAP parameters in binary Bayesian networks under incomplete data. The method assumes Beta priors and can be used to learn maximum likelihood parameters when the priors are uninformative. EDML exhibits interesting behaviors, especially when compared to EM.
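For orientation before the details (this is the textbook Beta-MAP baseline for complete data, not EDML's update rule): with a Beta(\alpha, \beta) prior on a single parameter \theta of a binary variable, and complete data containing n_1 positive and n_0 negative cases, the MAP estimate is

```latex
\hat{\theta}_{\mathrm{MAP}} \;=\; \frac{\alpha + n_1 - 1}{\alpha + \beta + n_1 + n_0 - 2},
```

which reduces to the maximum likelihood estimate n_1/(n_1+n_0) under the uninformative prior \alpha = \beta = 1; it is the incompleteness of the data that makes the learning problem above non-trivial.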
We introduce EDML, explain its origin, and study some of its properties both analytically and empirically. ",EDML: A Method for Learning Parameters in Bayesian Networks " We present a novel approach to constraint-based causal discovery, that takes the form of straightforward logical inference, applied to a list of simple, logical statements about causal relations that are derived directly from observed (in)dependencies. It is both sound and complete, in the sense that all invariant features of the corresponding partial ancestral graph (PAG) are identified, even in the presence of latent variables and selection bias. The approach shows that every identifiable causal relation corresponds to one of just two fundamental forms. More importantly, as the basic building blocks of the method do not rely on the detailed (graphical) structure of the corresponding PAG, it opens up a range of new opportunities, including more robust inference, detailed accountability, and application to large models. ",A Logical Characterization of Constraint-Based Causal Discovery " The problem of learning the structure of Bayesian networks from complete discrete data with a limit on parent set size is considered. Learning is cast explicitly as an optimisation problem where the goal is to find a BN structure which maximises log marginal likelihood (BDe score). Integer programming, specifically the SCIP framework, is used to solve this optimisation problem. Acyclicity constraints are added to the integer program (IP) during solving in the form of cutting planes. Finding good cutting planes is the key to the success of the approach -the search for such cutting planes is effected using a sub-IP. Results show that this is a particularly fast method for exact BN learning. ",Bayesian network learning with cutting planes " When the information about uncertainty cannot be quantified in a simple, probabilistic way, the topic of possibilistic decision theory is often a natural one to consider. The development of possibilistic decision theory has lead to a series of possibilistic criteria, e.g pessimistic possibilistic qualitative utility, possibilistic likely dominance, binary possibilistic utility and possibilistic Choquet integrals. This paper focuses on sequential decision making in possibilistic decision trees. It proposes a complexity study of the problem of finding an optimal strategy depending on the monotonicity property of the optimization criteria which allows the application of dynamic programming that offers a polytime reduction of the decision problem. It also shows that possibilistic Choquet integrals do not satisfy this property, and that in this case the optimization problem is NP - hard. ",On the Complexity of Decision Making in Possibilistic Decision Trees " Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. Several classical probabilistic inference tasks (such as MAP and computing marginals) have not yet received a lot of attention for this formalism. The contribution of this paper is that we develop efficient inference algorithms for these tasks. This is based on a conversion of the probabilistic logic program and the query and evidence to a weighted CNF formula. This allows us to reduce the inference tasks to well-studied tasks such as weighted model counting. To solve such tasks, we employ state-of-the-art methods. We consider multiple methods for the conversion of the programs as well as for inference on the weighted CNF. 
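As a concrete, deliberately tiny illustration of the target task, the sketch below brute-forces weighted model counting on a small weighted CNF; the clauses and weights are hypothetical, and the systems discussed above use far more efficient counters than enumeration.

```python
# Illustrative sketch only (not the paper's conversion): brute-force weighted
# model counting on a tiny weighted CNF, the task the converted program,
# query and evidence are reduced to.
from itertools import product

def weighted_model_count(n_vars, clauses, weight):
    """Sum, over all satisfying assignments, of the product of literal weights.

    clauses: list of tuples of non-zero ints (positive = var true, negative = var false)
    weight:  dict mapping each signed literal to its weight
    """
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            w = 1.0
            for v, b in assignment.items():
                w *= weight[v if b else -v]
            total += w
    return total

# Hypothetical example: probabilistic fact p with probability 0.3 and rule "q :- p",
# whose completion q <-> p is encoded as (~p or q) and (~q or p).
clauses = [(-1, 2), (-2, 1)]
weight = {1: 0.3, -1: 0.7, 2: 1.0, -2: 1.0}
print(weighted_model_count(2, clauses, weight))  # 1.0; restricting to models with q true gives 0.3
```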
The resulting approach is evaluated experimentally and shown to improve upon the state-of-the-art in probabilistic logic programming. ",Inference in Probabilistic Logic Programs using Weighted CNF's " The ideas about decision making under ignorance in economics are combined with the ideas about uncertainty representation in computer science. The combination sheds new light on the question of how artificial agents can act in a dynamically consistent manner. The notion of sequential consistency is formalized by adapting the law of iterated expectation for plausibility measures. The necessary and sufficient condition for a certainty equivalence operator for Nehring-Puppe's preference to be sequentially consistent is given. This result sheds light on the models of decision making under uncertainty. ",Dynamic consistency and decision making under vacuous belief " Inference in graphical models consists of repeatedly multiplying and summing out potentials. It is generally intractable because the derived potentials obtained in this way can be exponentially large. Approximate inference techniques such as belief propagation and variational methods combat this by simplifying the derived potentials, typically by dropping variables from them. We propose an alternate method for simplifying potentials: quantizing their values. Quantization causes different states of a potential to have the same value, and therefore introduces context-specific independencies that can be exploited to represent the potential more compactly. We use algebraic decision diagrams (ADDs) to do this efficiently. We apply quantization and ADD reduction to variable elimination and junction tree propagation, yielding a family of bounded approximate inference schemes. Our experimental tests show that our new schemes significantly outperform state-of-the-art approaches on many benchmark instances. ",Approximation by Quantization " Many representation schemes combining first-order logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which take logical structure into account. We propose the first method that has the full power of both graphical model inference and first-order theorem proving (in finite domains with Herbrand interpretations). We first define probabilistic theorem proving, their generalization, as the problem of computing the probability of a logical formula given the probabilities or weights of a set of formulas. We then show how this can be reduced to the problem of lifted weighted model counting, and develop an efficient algorithm for the latter. We prove the correctness of this algorithm, investigate its properties, and show how it generalizes previous approaches. Experiments show that it greatly outperforms lifted variable elimination when logical structure is present. Finally, we propose an algorithm for approximate probabilistic theorem proving, and show that it can greatly outperform lifted belief propagation. ",Probabilistic Theorem Proving " This paper presents an approach for learning to translate simple narratives, i.e., texts (sequences of sentences) describing dynamic systems, into coherent sequences of events without the need for labeled training data. 
Our approach incorporates domain knowledge in the form of preconditions and effects of events, and we show that it outperforms state-of-the-art supervised learning systems on the task of reconstructing RoboCup soccer games from their commentaries. ",Reasoning about RoboCup Soccer Narratives " We consider how to use the Bellman residual of the dynamic programming operator to compute suboptimality bounds for solutions to stochastic shortest path problems. Such bounds have been previously established only in the special case that ""all policies are proper,"" in which case the dynamic programming operator is known to be a contraction, and have been shown to be easily computable only in the more limited special case of discounting. Under the condition that transition costs are positive, we show that suboptimality bounds can be easily computed even when not all policies are proper. In the general case when there are no restrictions on transition costs, the analysis is more complex. But we present preliminary results that show such bounds are possible. ",Suboptimality Bounds for Stochastic Shortest Path Problems " We study the problem of agent-based negotiation in combinatorial domains. It is difficult to reach optimal agreements in bilateral or multi-lateral negotiations when the agents' preferences for the possible alternatives are not common knowledge. Self-interested agents often end up negotiating inefficient agreements in such situations. In this paper, we present a protocol for negotiation in combinatorial domains which can lead rational agents to reach optimal agreements under incomplete information setting. Our proposed protocol enables the negotiating agents to identify efficient solutions using distributed search that visits only a small subspace of the whole outcome space. Moreover, the proposed protocol is sufficiently general that it is applicable to most preference representation models in combinatorial domains. We also present results of experiments that demonstrate the feasibility and computational efficiency of our approach. ","An Efficient Protocol for Negotiation over Combinatorial Domains with Incomplete Information" " We present theoretical results in terms of lower and upper bounds on the query complexity of noisy search with comparative feedback. In this search model, the noise in the feedback depends on the distance between query points and the search target. Consequently, the error probability in the feedback is not fixed but varies for the queries posed by the search algorithm. Our results show that a target out of n items can be found in O(log n) queries. We also show the surprising result that for k possible answers per query, the speedup is not log k (as for k-ary search) but only log log k in some cases. ",Noisy Search with Comparative Feedback " Situation calculus has been applied widely in artificial intelligence to model and reason about actions and changes in dynamic systems. Since actions carried out by agents will cause constant changes of the agents' beliefs, how to manage these changes is a very important issue. Shapiro et al. [22] is one of the studies that considered this issue. However, in this framework, the problem of noisy sensing, which often presents in real-world applications, is not considered. As a consequence, noisy sensing actions in this framework will lead to an agent facing inconsistent situation and subsequently the agent cannot proceed further. 
In this paper, we investigate how noisy sensing actions can be handled in iterated belief change within the situation calculus formalism. We extend the framework proposed in [22] with the capability of managing noisy sensings. We demonstrate that an agent can still detect the actual situation when the ratio of noisy sensing actions vs. accurate sensing actions is limited. We prove that our framework subsumes the iterated belief change strategy in [22] when all sensing actions are accurate. Furthermore, we prove that our framework can adequately handle belief introspection, mistaken beliefs, belief revision and belief update even with noisy sensing, as done in [22] with accurate sensing actions only. ",Belief change with noisy sensing in the situation calculus " Previous work has shown that the problem of learning the optimal structure of a Bayesian network can be formulated as a shortest path finding problem in a graph and solved using A* search. In this paper, we improve the scalability of this approach by developing a memory-efficient heuristic search algorithm for learning the structure of a Bayesian network. Instead of using A*, we propose a frontier breadth-first branch and bound search that leverages the layered structure of the search graph of this problem so that no more than two layers of the graph, plus solution reconstruction information, need to be stored in memory at a time. To further improve scalability, the algorithm stores most of the graph in external memory, such as hard disk, when it does not fit in RAM. Experimental results show that the resulting algorithm solves significantly larger problems than the current state of the art. ","Improving the Scalability of Optimal Bayesian Network Learning with External-Memory Frontier Breadth-First Branch and Bound Search" " In this paper, we develop a qualitative theory of influence diagrams that can be used to model and solve sequential decision making tasks when only qualitative (or imprecise) information is available. Our approach is based on an order-of-magnitude approximation of both probabilities and utilities and allows for specifying partially ordered preferences via sets of utility values. We also propose a dedicated variable elimination algorithm that can be applied for solving order-of-magnitude influence diagrams. ",Order-of-Magnitude Influence Diagrams " To deal with the prohibitive complexity of calculating policies in Decentralized MDPs, researchers have proposed models that exploit structured agent interactions. Settings where most agent actions are independent except for few actions that affect the transitions and/or rewards of other agents can be modeled using Event-Driven Interactions with Complex Rewards (EDI-CR). Finding the optimal joint policy can be formulated as an optimization problem. However, existing formulations are too verbose and/or lack optimality guarantees. We propose a compact Mixed Integer Linear Program formulation of EDI-CR instances. The key insight is that most action sequences of a group of agents have the same effect on a given agent. This allows us to treat these sequences similarly and use fewer variables. Experiments show that our formulation is more compact and leads to faster solution times and better solutions than existing formulations. ","Compact Mathematical Programs For DEC-MDPs With Structured Agent Interactions" " Markov decision processes (MDPs) are widely used in modeling decision making problems in stochastic environments. 
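Since the rest of this abstract builds on solving MDPs under reward uncertainty, here is a minimal background sketch of plain value iteration for a fully specified tabular MDP (this is standard background, not the paper's regret-based algorithm); the transition and reward numbers are invented for illustration.

```python
# Background sketch: tabular value iteration for a fully specified MDP.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P: (A, S, S) transition tensor, R: (A, S) expected rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V                 # one Bellman backup per action, shape (A, S)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)    # optimal values and a greedy policy
        V = V_new

P = np.array([[[0.9, 0.1], [0.2, 0.8]],       # action 0
              [[0.5, 0.5], [0.0, 1.0]]])      # action 1
R = np.array([[0.0, 1.0],                     # rewards for action 0 in states 0, 1
              [0.5, 0.0]])                    # rewards for action 1
print(value_iteration(P, R))
```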
However, precise specification of the reward functions in MDPs is often very difficult. Recent approaches have focused on computing an optimal policy based on the minimax regret criterion for obtaining a robust policy under uncertainty in the reward function. One of the core tasks in computing the minimax regret policy is to obtain the set of all policies that can be optimal for some candidate reward function. In this paper, we propose an efficient algorithm that exploits the geometric properties of the reward function associated with the policies. We also present an approximate version of the method for further speed up. We experimentally demonstrate that our algorithm improves the performance by orders of magnitude. ",A Geometric Traversal Algorithm for Reward-Uncertain MDPs " Hidden Markov models (HMMs) and conditional random fields (CRFs) are two popular techniques for modeling sequential data. Inference algorithms designed over CRFs and HMMs allow estimation of the state sequence given the observations. In several applications, estimation of the state sequence is not the end goal; instead the goal is to compute some function of it. In such scenarios, estimating the state sequence by conventional inference techniques, followed by computing the functional mapping from the estimate is not necessarily optimal. A more formal approach is to directly infer the final outcome from the observations. In particular, we consider the specific instantiation of the problem where the goal is to find the state trajectories without exact transition points and derive a novel polynomial time inference algorithm that outperforms vanilla inference techniques. We show that this particular problem arises commonly in many disparate applications and present experiments on three of them: (1) Toy robot tracking; (2) Single stroke character recognition; (3) Handwritten word recognition. ",Compressed Inference for Probabilistic Sequential Models " Many real-world decision-theoretic planning problems can be naturally modeled with discrete and continuous state Markov decision processes (DC-MDPs). While previous work has addressed automated decision-theoretic planning for DCMDPs, optimal solutions have only been defined so far for limited settings, e.g., DC-MDPs having hyper-rectangular piecewise linear value functions. In this work, we extend symbolic dynamic programming (SDP) techniques to provide optimal solutions for a vastly expanded class of DCMDPs. To address the inherent combinatorial aspects of SDP, we introduce the XADD - a continuous variable extension of the algebraic decision diagram (ADD) - that maintains compact representations of the exact value function. Empirically, we demonstrate an implementation of SDP with XADDs on various DC-MDPs, showing the first optimal automated solutions to DCMDPs with linear and nonlinear piecewise partitioned value functions and showing the advantages of constraint-based pruning for XADDs. ",Symbolic Dynamic Programming for Discrete and Continuous State MDPs " Identifying and controlling bias is a key problem in empirical sciences. Causal diagram theory provides graphical criteria for deciding whether and how causal effects can be identified from observed (nonexperimental) data by covariate adjustment. Here we prove equivalences between existing as well as new criteria for adjustment and we provide a new simplified but still equivalent notion of d-separation. 
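For reference, the target quantity that a valid covariate adjustment set Z is meant to recover is given by the standard adjustment formula (quoted here for orientation only; it is background, not a result of the paper):

```latex
P\big(y \mid \mathrm{do}(x)\big) \;=\; \sum_{z} P\big(y \mid x, z\big)\, P(z).
```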
These lead to efficient algorithms for two important tasks in causal diagram analysis: (1) listing minimal covariate adjustments (with polynomial delay); and (2) identifying the subdiagram involved in biasing paths (in linear time). Our results improve upon existing exponential-time solutions for these problems, enabling users to assess the effects of covariate adjustment on diagrams with tens to hundreds of variables interactively in real time. ",Adjustment Criteria in Causal Diagrams: An Algorithmic Perspective " We present a distributed anytime algorithm for performing MAP inference in graphical models. The problem is formulated as a linear programming relaxation over the edges of a graph. The resulting program has a constraint structure that allows application of the Dantzig-Wolfe decomposition principle. Subprograms are defined over individual edges and can be computed in a distributed manner. This accommodates solutions to graphs whose state space does not fit in memory. The decomposition master program is guaranteed to compute the optimal solution in a finite number of iterations, while the solution converges monotonically with each iteration. Formulating the MAP inference problem as a linear program allows additional (global) constraints to be defined; something not possible with message passing algorithms. Experimental results show that our algorithm's solution quality outperforms most current algorithms and it scales well to large problems. ",Distributed Anytime MAP Inference " Approximate Bayesian inference is NP-hard. Dagum and Luby defined the Local Variance Bound (LVB) to measure the approximation hardness of Bayesian inference on Bayesian networks, assuming the networks model strictly positive joint probability distributions, i.e. zero probabilities are not permitted. This paper introduces the k-test to measure the approximation hardness of inference on Bayesian networks with deterministic causalities in the probability distribution, i.e. when zero conditional probabilities are permitted. Approximation by stochastic sampling is a widely-used inference method that is known to suffer from inefficiencies due to sample rejection. The k-test predicts when the rejection rates of stochastic sampling on a Bayesian network will be low, modest, or high, or when sampling is intractable. ","Measuring the Hardness of Stochastic Sampling on Bayesian Networks with Deterministic Causalities: the k-Test" " This paper investigates a new method for improving the learning algorithm of the Mixture of Experts (ME) model using a hybrid of Modified Cuckoo Search (MCS) and Conjugate Gradient (CG) as a second order optimization technique. The CG technique is combined with the Back-Propagation (BP) algorithm to yield a much more efficient learning algorithm for the ME structure. In addition, the experts and gating networks in the enhanced model are replaced by CG based Multi-Layer Perceptrons (MLPs) to provide faster and more accurate learning. The CG method depends considerably on the initial connection weights of the Artificial Neural Network (ANN), so a metaheuristic algorithm, the so-called Modified Cuckoo Search, is applied in order to select the optimal weights. The performance of the proposed method is compared with Gradient Descent Based ME (GDME) and Conjugate Gradient Based ME (CGME) on classification and regression problems. The experimental results show that the hybrid MCS and CG based ME (MCS-CGME) has faster convergence and better performance on the utilized benchmark data sets.
","Extended Mixture of MLP Experts by Hybrid of Conjugate Gradient Method and Modified Cuckoo Search" " Spectrum sensing is a fundamental problem in cognitive radio. We propose a function-of-covariance-matrix based detection algorithm for spectrum sensing in cognitive radio networks. The monotonically increasing property of a matrix function involving the trace operation is utilized as the cornerstone of this algorithm. The advantage of the proposed algorithm is that it works under extremely low signal-to-noise ratios, e.g. lower than -30 dB, with limited sample data. A theoretical analysis of the threshold setting for the algorithm is discussed. A performance comparison between the proposed algorithm and other state-of-the-art methods is provided, by simulation on a captured digital television (DTV) signal. ","Generalized FMD Detection for Spectrum Sensing Under Low Signal-to-Noise Ratio" " The purpose of statistical disclosure control (SDC) of microdata, a.k.a. data anonymization or privacy-preserving data mining, is to publish data sets containing the answers of individual respondents in such a way that the respondents corresponding to the released records cannot be re-identified and the released data are analytically useful. SDC methods are either based on masking the original data, generating synthetic versions of them or creating hybrid versions by combining original and synthetic data. The choice of SDC methods for categorical data, especially nominal data, is much smaller than the choice of methods for numerical data. We mitigate this problem by introducing a numerical mapping for hierarchical nominal data which allows computing means, variances and covariances on them. ","Marginality: a numerical mapping for enhanced treatment of nominal and hierarchical attributes" " The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book (Hutter, 2005), an exciting, sound and complete mathematical model for a super intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in the JAIR paper (Veness et al. 2011). This practical breakthrough has resulted in some impressive applications, finally muting earlier critique that UAI is only a theory. For the first time, without providing any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even being given the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.
",One Decade of Universal Artificial Intelligence " Relational representations in reinforcement learning allow for the use of structural information like the presence of objects and relationships between them in the description of value functions. Through this paper, we show that such representations allow for the inclusion of background knowledge that qualitatively describes a state and can be used to design agents that demonstrate learning behavior in domains with large state and actions spaces such as computer games. ",Relational Reinforcement Learning in Infinite Mario " In a world where communication and information sharing are at the heart of our business, the terminology needs are most pressing. It has become imperative to identify the terms used and defined in a consensual and coherent way while preserving linguistic diversity. To streamline and strengthen the process of acquisition, representation and exploitation of scenarii of train accidents, it is necessary to harmonize and standardize the terminology used by players in the security field. The research aims to significantly improve analytical activities and operations of the various safety studies, by tracking the error in system, hardware, software and human. This paper presents the contribution of ontology to modeling scenarii for rail accidents through a knowledge model based on a generic ontology and domain ontology. After a detailed presentation of the state of the art material, this article presents the first results of the developed model. ","Development of an Ontology to Assist the Modeling of Accident Scenarii ""Application on Railroad Transport """ " The ability to model search in a constraint solver can be an essential asset for solving combinatorial problems. However, existing infrastructure for defining search heuristics is often inadequate. Either modeling capabilities are extremely limited or users are faced with a general-purpose programming language whose features are not tailored towards writing search heuristics. As a result, major improvements in performance may remain unexplored. This article introduces search combinators, a lightweight and solver-independent method that bridges the gap between a conceptually simple modeling language for search (high-level, functional and naturally compositional) and an efficient implementation (low-level, imperative and highly non-modular). By allowing the user to define application-tailored search strategies from a small set of primitives, search combinators effectively provide a rich domain-specific language (DSL) for modeling search to the user. Remarkably, this DSL comes at a low implementation cost to the developer of a constraint solver. The article discusses two modular implementation approaches and shows, by empirical evaluation, that search combinators can be implemented without overhead compared to a native, direct implementation in a constraint solver. ",Search Combinators " In Multi-Source Feedback or 360 Degree Feedback, data on the performance of an individual are collected systematically from a number of stakeholders and are used for improving performance. The 360-Degree Feedback approach provides a consistent management philosophy meeting the criterion outlined previously. The 360-degree feedback appraisal process describes a human resource methodology that is frequently used for both employee appraisal and employee development. 
Used in employee performance appraisals, the 360-degree feedback methodology is differentiated from traditional, top-down appraisal methods in which the supervisor responsible for the appraisal provides the majority of the data. Instead it seeks to use information gained from other sources to provide a fuller picture of employees' performances. Similarly, when this technique is used in employee development, it augments employees' perceptions of training needs with those of the people with whom they interact. The 360-degree feedback based appraisal is a comprehensive method wherein the feedback about the employee comes from all the sources that come into contact with the employee on his/her job. The respondents for an employee can be her/his peers, managers, subordinates, team members, customers, suppliers and vendors. Hence it covers anyone who comes into contact with the employee. The 360 degree appraisal has components that include self-appraisal, superior's appraisal, subordinate's appraisal, student's appraisal and peer's appraisal. The proposed system is an attempt to implement the 360 degree feedback based appraisal system in academia, especially engineering colleges. ","Multi source feedback based performance appraisal system using Fuzzy logic decision support system" " We propose a simple method for combining together voting rules that performs a run-off between the different winners of each voting rule. We prove that this combinator has several good properties. For instance, even if just one of the base voting rules has a desirable property like Condorcet consistency, the combination inherits this property. In addition, we prove that combining voting rules together in this way can make finding a manipulation more computationally difficult. Finally, we study the impact of this combinator on approximation methods that find close to optimal manipulations. ",Combining Voting Rules Together " Languages for open-universe probabilistic models (OUPMs) can represent situations with an unknown number of objects and identity uncertainty. While such cases arise in a wide range of important real-world applications, existing general purpose inference methods for OUPMs are far less efficient than those available for more restricted languages and model classes. This paper goes some way to remedying this deficit by introducing, and proving correct, a generalization of Gibbs sampling to partial worlds with possibly varying model structure. Our approach draws on and extends previous generic OUPM inference methods, as well as auxiliary variable samplers for nonparametric mixture models. It has been implemented for BLOG, a well-known OUPM language. Combined with compile-time optimizations, the resulting algorithm yields very substantial speedups over existing methods on several test cases, and substantially improves the practicality of OUPM languages generally. ",Gibbs Sampling in Open-Universe Stochastic Languages " Qualitative possibilistic networks, also known as min-based possibilistic networks, are important tools for handling uncertain information in the possibility theory framework. Despite their importance, only the junction tree adaptation has been proposed for exact reasoning with such networks. This paper explores alternative algorithms using compilation techniques. We first propose possibilistic adaptations of standard compilation-based probabilistic methods.
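As background for the min-based networks just introduced, the joint possibility distribution such a network encodes is given by the minimum-based chain rule (a standard definition in this setting, not a contribution of this paper):

```latex
\pi(x_1, \dots, x_n) \;=\; \min_{i = 1, \dots, n} \, \pi\big(x_i \mid x_{\mathrm{pa}(i)}\big).
```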
Then, we develop a new, purely possibilistic, method based on the transformation of the initial network into a possibilistic base. A comparative study shows that this latter performs better than the possibilistic adap- tations of probabilistic methods. This result is also confirmed by experimental results. ","Compiling Possibilistic Networks: Alternative Approaches to Possibilistic Inference" " Possibilistic answer set programming (PASP) extends answer set programming (ASP) by attaching to each rule a degree of certainty. While such an extension is important from an application point of view, existing semantics are not well-motivated, and do not always yield intuitive results. To develop a more suitable semantics, we first introduce a characterization of answer sets of classical ASP programs in terms of possibilistic logic where an ASP program specifies a set of constraints on possibility distributions. This characterization is then naturally generalized to define answer sets of PASP programs. We furthermore provide a syntactic counterpart, leading to a possibilistic generalization of the well-known Gelfond-Lifschitz reduct, and we show how our framework can readily be implemented using standard ASP solvers. ",Possibilistic Answer Set Programming Revisited " Performing sensitivity analysis for influence diagrams using the decision circuit framework is particularly convenient, since the partial derivatives with respect to every parameter are readily available [Bhattacharjya and Shachter, 2007; 2008]. In this paper we present three non-linear sensitivity analysis methods that utilize this partial derivative information and therefore do not require re-evaluating the decision situation multiple times. Specifically, we show how to efficiently compare strategies in decision situations, perform sensitivity to risk aversion and compute the value of perfect hedging [Seyller, 2008]. ",Three new sensitivity analysis methods for influence diagrams " Many machine learning applications require the ability to learn from and reason about noisy multi-relational data. To address this, several effective representations have been developed that provide both a language for expressing the structural regularities of a domain, and principled support for probabilistic inference. In addition to these two aspects, however, many applications also involve a third aspect-the need to reason about similarities-which has not been directly supported in existing frameworks. This paper introduces probabilistic similarity logic (PSL), a general-purpose framework for joint reasoning about similarity in relational domains that incorporates probabilistic reasoning about similarities and relational structure in a principled way. PSL can integrate any existing domain-specific similarity measures and also supports reasoning about similarities between sets of entities. We provide efficient inference and learning techniques for PSL and demonstrate its effectiveness both in common relational tasks and in settings that require reasoning about similarity. ",Probabilistic Similarity Logic " The Next Generation Air Transportation System will introduce new, advanced sensor technologies into the cockpit. With the introduction of such systems, the responsibilities of the pilot are expected to dramatically increase. In the ALARMS (ALerting And Reasoning Management System) project for NASA, we focus on a key challenge of this environment, the quick and efficient handling of aircraft sensor alerts. 
It is infeasible to alert the pilot on the state of all subsystems at all times. Furthermore, there is uncertainty as to the true hazard state despite the evidence of the alerts, and there is uncertainty as to the effect and duration of actions taken to address these alerts. This paper reports on the first steps in the construction of an application designed to handle Next Generation alerts. In ALARMS, we have identified 60 different aircraft subsystems and 20 different underlying hazards. In this paper, we show how a Bayesian network can be used to derive the state of the underlying hazards, based on the sensor input. Then, we propose a framework whereby an automated system can plan to address these hazards in cooperation with the pilot, using a Time-Dependent Markov Process (TMDP). Different hazards and pilot states will call for different alerting automation plans. We demonstrate this emerging application of Bayesian networks and TMDPs to cockpit automation, for a use case where a small number of hazards are present, and analyze the resulting alerting automation policies. ","ALARMS: Alerting and Reasoning Management System for Next Generation Aircraft Hazards" " Relational Continuous Models (RCMs) represent joint probability densities over attributes of objects, when the attributes have continuous domains. With relational representations, they can model joint probability distributions over large numbers of variables compactly in a natural way. This paper presents a new exact lifted inference algorithm for RCMs, thus it scales up to large models of real world applications. The algorithm applies to Relational Pairwise Models which are (relational) products of potentials of arity 2. Our algorithm is unique in two ways. First, it substantially improves the efficiency of lifted inference with variables of continuous domains. When a relational model has Gaussian potentials, it takes only linear-time compared to cubic time of previous methods. Second, it is the first exact inference algorithm which handles RCMs in a lifted way. The algorithm is illustrated over an example from econometrics. Experimental results show that our algorithm outperforms both a groundlevel inference algorithm and an algorithm built with previously-known lifted methods. ",Lifted Inference for Relational Continuous Models " We propose a new point-based method for approximate planning in Dec-POMDP which outperforms the state-of-the-art approaches in terms of solution quality. It uses a heuristic estimation of the prior probability of beliefs to choose a bounded number of policy trees: this choice is formulated as a combinatorial optimisation problem minimising the error induced by pruning. ",Distribution over Beliefs for Memory Bounded Dec-POMDP Planning " Partially-Observable Markov Decision Processes (POMDPs) are typically solved by finding an approximate global solution to a corresponding belief-MDP. In this paper, we offer a new planning algorithm for POMDPs with continuous state, action and observation spaces. Since such domains have an inherent notion of locality, we can find an approximate solution using local optimization methods. We parameterize the belief distribution as a Gaussian mixture, and use the Extended Kalman Filter (EKF) to approximate the belief update. Since the EKF is a first-order filter, we can marginalize over the observations analytically. 
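The following is a hedged sketch of the kind of Gaussian belief update referred to above: a single Extended Kalman Filter step for a belief N(mu, Sigma). The functions f, h and their Jacobians F, H are placeholders for the (unspecified) dynamics and observation models of the domain, and Q, R are assumed process and observation noise covariances; this is generic EKF background, not the paper's planner.

```python
# Generic EKF belief update for a Gaussian belief N(mu, Sigma).
import numpy as np

def ekf_step(mu, Sigma, u, z, f, F, h, H, Q, R):
    # Predict: push the belief through the (linearized) dynamics under control u.
    mu_bar = f(mu, u)
    F_x = F(mu, u)
    Sigma_bar = F_x @ Sigma @ F_x.T + Q
    # Update: correct the prediction with observation z via the Kalman gain.
    H_x = H(mu_bar)
    S = H_x @ Sigma_bar @ H_x.T + R
    K = Sigma_bar @ H_x.T @ np.linalg.inv(S)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H_x) @ Sigma_bar
    return mu_new, Sigma_new
```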
By using feedback control and state estimation during policy execution, we recover a behavior that is effectively conditioned on incoming observations despite the unconditioned planning. Local optimization provides no guarantees of global optimality, but it allows us to tackle domains that are at least an order of magnitude larger than the current state-of-the-art. We demonstrate the scalability of our algorithm by considering a simulated hand-eye coordination domain with 16 continuous state dimensions and 6 continuous action dimensions. ","A Scalable Method for Solving High-Dimensional Continuous POMDPs Using Local Approximation" " Computing the probability of a formula given the probabilities or weights associated with other formulas is a natural extension of logical inference to the probabilistic setting. Surprisingly, this problem has received little attention in the literature to date, particularly considering that it includes many standard inference problems as special cases. In this paper, we propose two algorithms for this problem: formula decomposition and conditioning, which is an exact method, and formula importance sampling, which is an approximate method. The latter is, to our knowledge, the first application of model counting to approximate probabilistic inference. Unlike conventional variable-based algorithms, our algorithms work in the dual realm of logical formulas. Theoretically, we show that our algorithms can greatly improve efficiency by exploiting the structural information in the formulas. Empirically, we show that they are indeed quite powerful, often achieving substantial performance gains over state-of-the-art schemes. ",Formula-Based Probabilistic Inference " Decentralized POMDPs provide an expressive framework for multi-agent sequential decision making. While finite-horizon DEC-POMDPs have enjoyed significant success, progress remains slow for the infinite-horizon case mainly due to the inherent complexity of optimizing stochastic controllers representing agent policies. We present a promising new class of algorithms for the infinite-horizon case, which recasts the optimization problem as inference in a mixture of DBNs. An attractive feature of this approach is the straightforward adoption of existing inference techniques in DBNs for solving DEC-POMDPs and supporting richer representations such as factored or continuous states and actions. We also derive the Expectation Maximization (EM) algorithm to optimize the joint policy represented as DBNs. Experiments on benchmark domains show that EM compares favorably against the state-of-the-art solvers. ",Anytime Planning for Decentralized POMDPs using Expectation Maximization " We describe a framework and an algorithm for solving hybrid influence diagrams with discrete, continuous, and deterministic chance variables, and discrete and continuous decision variables. A continuous chance variable in an influence diagram is said to be deterministic if its conditional distributions have zero variances. The solution algorithm is an extension of Shenoy's fusion algorithm for discrete influence diagrams. We describe an extended Shenoy-Shafer architecture for propagation of discrete, continuous, and utility potentials in hybrid influence diagrams that include deterministic chance variables. The algorithm and framework are illustrated by solving two small examples.
","Solving Hybrid Influence Diagrams with Deterministic Variables" " The paper introduces k-bounded MAP inference, a parameterization of MAP inference in Markov logic networks. k-Bounded MAP states are MAP states with at most k active ground atoms of hidden (non-evidence) predicates. We present a novel delayed column generation algorithm and provide empirical evidence that the algorithm efficiently computes k-bounded MAP states for meaningful real-world graph matching problems. The underlying idea is that, instead of solving one large optimization problem, it is often more efficient to tackle several small ones. ","A Delayed Column Generation Strategy for Exact k-Bounded MAP Inference in Markov Logic Networks" " Rollating walkers are popular mobility aids used by older adults to improve balance control. There is a need to automatically recognize the activities performed by walker users to better understand activity patterns, mobility issues and the context in which falls are more likely to happen. We design and compare several techniques to recognize walker related activities. A comprehensive evaluation with control subjects and walker users from a retirement community is presented. ","Comparative Analysis of Probabilistic Models for Activity Recognition with an Instrumented Walker" " Belief merging is an important but difficult problem in Artificial Intelligence, especially when sources of information are pervaded with uncertainty. Many merging operators have been proposed to deal with this problem in possibilistic logic, a weighted logic which is powerful for handling inconsistency and dealing with uncertainty. They often result in a possibilistic knowledge base which is a set of weighted formulas. Although possibilistic logic is inconsistency tolerant, it suffers from the well-known ""drowning effect"". Therefore, we may still want to obtain a consistent possibilistic knowledge base as the result of merging. In such a case, we argue that it is not always necessary to keep weighted information after merging. In this paper, we define a merging operator that maps a set of possibilistic knowledge bases and a formula representing the integrity constraints to a classical knowledge base by using lexicographic ordering. We show that it satisfies nine postulates that generalize basic postulates for propositional merging given in [11]. These postulates capture the principle of minimal change in some sense. We then provide an algorithm for generating the resulting knowledge base of our merging operator. Finally, we discuss the compatibility of our merging operator with propositional merging and establish the advantage of our merging operator over existing semantic merging operators in the propositional case. ","Merging Knowledge Bases in Possibilistic Logic by Lexicographic Aggregation" " The standard coherence criterion for lower previsions is expressed using an infinite number of linear constraints. For lower previsions that are essentially defined on some finite set of gambles on a finite possibility space, we present a reformulation of this criterion that only uses a finite number of constraints. Any such lower prevision is coherent if it lies within the convex polytope defined by these constraints. The vertices of this polytope are the extreme coherent lower previsions for the given set of gambles. Our reformulation makes it possible to compute them. We show how this is done and illustrate the procedure and its results.
","Characterizing the Set of Coherent Lower Previsions with a Finite Number of Constraints or Vertices" " Decision circuits perform efficient evaluation of influence diagrams, building on the advances in arithmetic circuits for belief network inference [Darwiche, 2003; Bhattacharjya and Shachter, 2007]. We show how even more compact decision circuits can be constructed for dynamic programming in influence diagrams with separable value functions and conditionally independent subproblems. Once a decision circuit has been constructed based on the diagram's ""global"" graphical structure, it can be compiled to exploit ""local"" structure for efficient evaluation and sensitivity analysis. ",Dynamic programming in influence diagrams with decision circuits " In this paper, we present the Difference-Based Causality Learner (DBCL), an algorithm for learning a class of discrete-time dynamic models that represents all causation across time by means of difference equations driving change in a system. We motivate this representation with real-world mechanical systems and prove DBCL's correctness for learning structure from time series data, an endeavour that is complicated by the existence of latent derivatives that have to be detected. We also prove that, under common assumptions for causal discovery, DBCL will identify the presence or absence of feedback loops, making the model more useful for predicting the effects of manipulating variables when the system is in equilibrium. We argue analytically and show empirically the advantages of DBCL over vector autoregression (VAR) and Granger causality models as well as modified forms of Bayesian and constraint-based structure discovery algorithms. Finally, we show that our algorithm can discover causal directions of alpha rhythms in human brains from EEG data. ",Learning Why Things Change: The Difference-Based Causality Learner " We present decentralized rollout sampling policy iteration (DecRSPI) - a new algorithm for multi-agent decision problems formalized as DEC-POMDPs. DecRSPI is designed to improve scalability and tackle problems that lack an explicit model. The algorithm uses Monte-Carlo methods to generate a sample of reachable belief states. Then it computes a joint policy for each belief state based on the rollout estimations. A new policy representation allows us to represent solutions compactly. The key benefits of the algorithm are its linear time complexity over the number of agents, its bounded memory usage and good solution quality. It can solve larger problems that are intractable for existing planning algorithms. Experimental results confirm the effectiveness and scalability of the approach. ",Rollout Sampling Policy Iteration for Decentralized POMDPs " A branch-and-bound approach to solving influence diagrams has been previously proposed in the literature, but appears to have never been implemented and evaluated - apparently due to the difficulties of computing effective bounds for the branch-and-bound search. In this paper, we describe how to efficiently compute effective bounds, and we develop a practical implementation of depth-first branch-and-bound search for influence diagram evaluation that outperforms existing methods for solving influence diagrams with multiple stages. ",Solving Multistage Influence Diagrams using Branch-and-Bound Search " Despite the intractability of generic optimal partially observable Markov decision process planning, there exist important problems that have highly structured models.
Previous researchers have used this insight to construct more efficient algorithms for factored domains, and for domains with topological structure in the flat state dynamics model. In our work, motivated by findings from the education community relevant to automated tutoring, we consider problems that exhibit a form of topological structure in the factored dynamics model. Our Reachable Anytime Planner for Imprecisely-sensed Domains (RAPID) leverages this structure to efficiently compute a good initial envelope of reachable states under the optimal MDP policy in time linear in the number of state variables. RAPID performs partially-observable planning over the limited envelope of states, and slowly expands the state space considered as time allows. RAPID performs well on a large tutoring-inspired problem simulation with 122 state variables, corresponding to a flat state space of over 10^30 states. ",RAPID: A Reachable Anytime Planner for Imprecisely-sensed Domains " UCT has recently emerged as an exciting new adversarial reasoning technique based on cleverly balancing exploration and exploitation in a Monte-Carlo sampling setting. It has been particularly successful in the game of Go but the reasons for its success are not well understood and attempts to replicate its success in other domains such as Chess have failed. We provide an in-depth analysis of the potential of UCT in domain-independent settings, in cases where heuristic values are available, and the effect of enhancing random playouts to more informed playouts between two weak minimax players. To provide further insights, we develop synthetic game tree instances and discuss interesting properties of UCT, both empirically and analytically. ",Understanding Sampling Style Adversarial Search Methods " Probabilistic Logic Programming (PLP), exemplified by Sato and Kameya's PRISM, Poole's ICL, De Raedt et al.'s ProbLog and Vennekens et al.'s LPAD, combines statistical and logical knowledge representation and inference. Inference in these languages is based on enumerative construction of proofs over logic programs. Consequently, these languages permit very limited use of random variables with continuous distributions. In this paper, we extend PRISM with Gaussian random variables and linear equality constraints, and consider the problem of parameter learning in the extended language. Many statistical models such as finite mixture models and Kalman filters can be encoded in extended PRISM. Our EM-based learning algorithm uses a symbolic inference procedure that represents sets of derivations without enumeration. This permits us to learn the distribution parameters of extended PRISM programs with discrete as well as Gaussian variables. The learning algorithm naturally generalizes the ones used for PRISM and Hybrid Bayesian Networks. ",Parameter Learning in PRISM Programs with Continuous Random Variables " Decision making, whenever and wherever it happens, is key to an organization's success. In order to make correct decisions, individuals, teams and organizations need both knowledge management (to manage content) and collaboration (to manage group processes) to make them more effective and efficient. In this paper, we explain the convergence of knowledge management and collaboration. Then, we propose a formal description of the mixed and multimodal decision making (MDM) process, where decisions may be made in three possible modes: individual, collective or hybrid. Finally, we make the MDM process explicit using the UML-G profile. 
",Modeling of Mixed Decision Making Process " We consider infinite-horizon $\gamma$-discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. We consider the algorithm Value Iteration and the sequence of policies $\pi_1,...,\pi_k$ it implicitely generates until some iteration $k$. We provide performance bounds for non-stationary policies involving the last $m$ generated policies that reduce the state-of-the-art bound for the last stationary policy $\pi_k$ by a factor $\frac{1-\gamma}{1-\gamma^m}$. In particular, the use of non-stationary policies allows to reduce the usual asymptotic performance bounds of Value Iteration with errors bounded by $\epsilon$ at each iteration from $\frac{\gamma}{(1-\gamma)^2}\epsilon$ to $\frac{\gamma}{1-\gamma}\epsilon$, which is significant in the usual situation when $\gamma$ is close to 1. Given Bellman operators that can only be computed with some error $\epsilon$, a surprising consequence of this result is that the problem of ""computing an approximately optimal non-stationary policy"" is much simpler than that of ""computing an approximately optimal stationary policy"", and even slightly simpler than that of ""approximately computing the value of some fixed policy"", since this last problem only has a guarantee of $\frac{1}{1-\gamma}\epsilon$. ","On the Use of Non-Stationary Policies for Infinite-Horizon Discounted Markov Decision Processes" " Informledge System (ILS) is a knowledge network with autonomous nodes and intelligent links that integrate and structure the pieces of knowledge. In this paper, we aim to put forward the link dynamics involved in intelligent processing of information in ILS. There has been advancement in knowledge management field which involve managing information in databases from a single domain. ILS works with information from multiple domains stored in distributed way in the autonomous nodes termed as Knowledge Network Node (KNN). Along with the concept under consideration, KNNs store the processed information linking concepts and processors leading to the appropriate processing of information. ","Creating Intelligent Linking for Information Threading in Knowledge Networks" " Expert systems use human knowledge often stored as rules within the computer to solve problems that generally would entail human intelligence. Today, with information systems turning out to be more pervasive and with the myriad advances in information technologies, automating computer fault diagnosis is becoming so fundamental that soon every enterprise has to endorse it. This paper proposes an expert system called Expert PC Troubleshooter for diagnosing computer problems. The system is composed of a user interface, a rule-base, an inference engine, and an expert interface. Additionally, the system features a fuzzy-logic module to troubleshoot POST beep errors, and an intelligent agent that assists in the knowledge acquisition process. The proposed system is meant to automate the maintenance, repair, and operations (MRO) process, and free-up human technicians from manually performing routine, laborious, and timeconsuming maintenance tasks. As future work, the proposed system is to be parallelized so as to boost its performance and speed-up its various operations. ",Expert PC Troubleshooter With Fuzzy-Logic And Self-Learning Support " Some aspects of the result of applying unit resolution on a CNF formula can be formalized as functions with domain a set of partial truth assignments. 
We are interested in two ways for computing such functions, depending on whether the result is the production of the empty clause or the assignment of a variable with a given truth value. We show that these two models can compute the same functions with formulae of polynomially related sizes, and we explain how this result is related to the CNF encoding of Boolean constraints. ",Unit contradiction versus unit propagation " The development of expert system for treatment of Diabetes disease by using natural methods is new information technology derived from Artificial Intelligent research using ESTA (Expert System Text Animation) System. The proposed expert system contains knowledge about various methods of natural treatment methods (Massage, Herbal/Proper Nutrition, Acupuncture, Gems) for Diabetes diseases of Human Beings. The system is developed in the ESTA (Expert System shell for Text Animation) which is Visual Prolog 7.3 Application. The knowledge for the said system will be acquired from domain experts, texts and other related sources. ","Development of knowledge Base Expert System for Natural treatment of Diabetes disease" " In this report, we will be interested at Dynamic Bayesian Network (DBNs) as a model that tries to incorporate temporal dimension with uncertainty. We start with basics of DBN where we especially focus in Inference and Learning concepts and algorithms. Then we will present different levels and methods of creating DBNs as well as approaches of incorporating temporal dimension in static Bayesian network. ",Characterization of Dynamic Bayesian Network " Through history, the human being tried to relay its daily tasks to other creatures, which was the main reason behind the rise of civilizations. It started with deploying animals to automate tasks in the field of agriculture(bulls), transportation (e.g. horses and donkeys), and even communication (pigeons). Millenniums after, come the Golden age with ""Al-jazari"" and other Muslim inventors, which were the pioneers of automation, this has given birth to industrial revolution in Europe, centuries after. At the end of the nineteenth century, a new era was to begin, the computational era, the most advanced technological and scientific development that is driving the mankind and the reason behind all the evolutions of science; such as medicine, communication, education, and physics. At this edge of technology engineers and scientists are trying to model a machine that behaves the same as they do, which pushed us to think about designing and implementing ""Things that-Thinks"", then artificial intelligence was. In this work we will cover each of the major discoveries and studies in the field of machine cognition, which are the ""Elementary Perceiver and Memorizer""(EPAM) and ""The General Problem Solver""(GPS). The First one focus mainly on implementing the human-verbal learning behavior, while the second one tries to model an architecture that is able to solve problems generally (e.g. theorem proving, chess playing, and arithmetic). We will cover the major goals and the main ideas of each model, as well as comparing their strengths and weaknesses, and finally giving their fields of applications. And Finally, we will suggest a real life implementation of a cognitive machine. ",Machine Cognition Models: EPAM and GPS " We present a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities (STA) detected on video frames. 
The output is a set of recognised long-term activities (LTA), which are pre-defined temporal combinations of STA. The constraints on the STA that, if satisfied, lead to the recognition of a LTA, have been expressed using a dialect of the Event Calculus. In order to handle the uncertainty that naturally occurs in human activity recognition, we adapted this dialect to a state-of-the-art probabilistic logic programming framework. We present a detailed evaluation and comparison of the crisp and probabilistic approaches through experimentation on a benchmark dataset of human surveillance videos. ",A Probabilistic Logic Programming Event Calculus " The article discusses some applications of fuzzy logic ideas to formalizing of the Case-Based Reasoning (CBR) process and to measuring the effectiveness of CBR systems ",Applications of fuzzy logic to Case-Based Reasoning " One of the big challenges in the development of probabilistic relational (or probabilistic logical) modeling and learning frameworks is the design of inference techniques that operate on the level of the abstract model representation language, rather than on the level of ground, propositional instances of the model. Numerous approaches for such ""lifted inference"" techniques have been proposed. While it has been demonstrated that these techniques will lead to significantly more efficient inference on some specific models, there are only very recent and still quite restricted results that show the feasibility of lifted inference on certain syntactically defined classes of models. Lower complexity bounds that imply some limitations for the feasibility of lifted inference on more expressive model classes were established early on in (Jaeger 2000). However, it is not immediate that these results also apply to the type of modeling languages that currently receive the most attention, i.e., weighted, quantifier-free formulas. In this paper we extend these earlier results, and show that under the assumption that NETIME =/= ETIME, there is no polynomial lifted inference algorithm for knowledge bases of weighted, quantifier- and function-free formulas. Further strengthening earlier results, this is also shown to hold for approximate inference, and for knowledge bases not containing the equality predicate. ",Lower Complexity Bounds for Lifted Inference " Statistical evidence of the influence of neighborhood topology on the performance of particle swarm optimization (PSO) algorithms has been shown in many works. However, little has been done about the implications could have the percolation threshold in determining the topology of this neighborhood. This work addresses this problem for individuals that, like robots, are able to sense in a limited neighborhood around them. Based on the concept of percolation threshold, and more precisely, the disk percolation model in 2D, we show that better results are obtained for low values of radius, when individuals occasionally ask others their best visited positions, with the consequent decrease of computational complexity. On the other hand, since percolation threshold is a universal measure, it could have a great interest to compare the performance of different hybrid PSO algorithms. ",On how percolation threshold affects PSO performance " The solution of the biobjective IRP is rather challenging, even for metaheuristics. We are still lacking a profound understanding of appropriate solution representations and effective neighborhood structures. 
Clearly, both the delivery volumes and the routing aspects of the alternatives need to be reflected in an encoding, and must be modified when searching by means of local search. Our work contributes to the better understanding of such solution representations. On the basis of an experimental investigation, the advantages and drawbacks of two encodings are studied and compared. ","Solution Representations and Local Search for the bi-objective Inventory Routing Problem" " Today, one's disposes of large datasets composed of thousands of geographic objects. However, for many processes, which require the appraisal of an expert or much computational time, only a small part of these objects can be taken into account. In this context, robust sampling methods become necessary. In this paper, we propose a sampling method based on clustering techniques. Our method consists in dividing the objects in clusters, then in selecting in each cluster, the most representative objects. A case-study in the context of a process dedicated to knowledge revision for geographic data generalisation is presented. This case-study shows that our method allows to select relevant samples of objects. ",Automatic Sampling of Geographic objects " Both humans and artificial systems frequently use trial and error methods to problem solving. In order to be effective, this type of strategy implies having high quality control knowledge to guide the quest for the optimal solution. Unfortunately, this control knowledge is rarely perfect. Moreover, in artificial systems-as in humans-self-evaluation of one's own knowledge is often difficult. Yet, this self-evaluation can be very useful to manage knowledge and to determine when to revise it. The objective of our work is to propose an automated approach to evaluate the quality of control knowledge in artificial systems based on a specific trial and error strategy, namely the informed tree search strategy. Our revision approach consists in analysing the system's execution logs, and in using the belief theory to evaluate the global quality of the knowledge. We present a real-world industrial application in the form of an experiment using this approach in the domain of cartographic generalisation. Thus far, the results of using our approach have been encouraging. ","Using Belief Theory to Diagnose Control Knowledge Quality. Application to cartographic generalisation" " In this paper we develop a fuzzy model for the description of the process of Analogical Reasoning by representing its main steps as fuzzy subsets of a set of linguistic labels characterizing the individuals' performance in each step and we use the Shannon- Wiener diversity index as a measure of the individuals' abilities in analogical problem solving. This model is compared with a stochastic model presented in author's earlier papers by introducing a finite Markov chain on the steps of the process of Analogical Reasoning. A classroom experiment is also presented to illustrate the use of our results in practice. ",A Fuzzy Model for Analogical Problem Solving " Without Linked Data, transport data is limited to applications exclusively around transport. In this paper, we present a workflow for publishing and linking transport data on the Web. So we will be able to develop transport applications and to add other features which will be created from other datasets. This will be possible because transport data will be linked to these datasets. 
We apply this workflow to two datasets: NEPTUNE, a French standard describing a transport line, and Passim, a directory containing relevant information on transport services, in every French city. ",Publishing and linking transport data on the Web " Attribute reduction is viewed as an important preprocessing step for pattern recognition and data mining. Most of researches are focused on attribute reduction by using rough sets. Recently, Tsang et al. discussed attribute reduction with covering rough sets in the paper [E. C.C. Tsang, D. Chen, Daniel S. Yeung, Approximations and reducts with covering generalized rough sets, Computers and Mathematics with Applications 56 (2008) 279-289], where an approach based on discernibility matrix was presented to compute all attribute reducts. In this paper, we provide an improved approach by constructing simpler discernibility matrix with covering rough sets, and then proceed to improve some characterizations of attribute reduction provided by Tsang et al. It is proved that the improved discernible matrix is equivalent to the old one, but the computational complexity of discernible matrix is greatly reduced. ",An improved approach to attribute reduction with covering rough sets " This is the Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, which was held in Barcelona, Spain, July 14 - 17 2011. ","Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (2011)" " This is the Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, which was held on Catalina Island, CA, July 8 - 11 2010. ","Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (2010)" " Most Relevant Explanation (MRE) is a method for finding multivariate explanations for given evidence in Bayesian networks [12]. This paper studies the theoretical properties of MRE and develops an algorithm for finding multiple top MRE solutions. Our study shows that MRE relies on an implicit soft relevance measure in automatically identifying the most relevant target variables and pruning less relevant variables from an explanation. The soft measure also enables MRE to capture the intuitive phenomenon of explaining away encoded in Bayesian networks. Furthermore, our study shows that the solution space of MRE has a special lattice structure which yields interesting dominance relations among the solutions. A K-MRE algorithm based on these dominance relations is developed for generating a set of top solutions that are more representative. Our empirical results show that MRE methods are promising approaches for explanation in Bayesian networks. ","Most Relevant Explanation: Properties, Algorithms, and Evaluations" " This paper develops an inconsistency measure on conditional probabilistic knowledge bases. The measure is based on fundamental principles for inconsistency measures and thus provides a solid theoretical framework for the treatment of inconsistencies in probabilistic expert systems. We illustrate its usefulness and immediate application on several examples and present some formal results. Building on this measure we use the Shapley value-a well-known solution for coalition games-to define a sophisticated indicator that is not only able to measure inconsistencies but to reveal the causes of inconsistencies in the knowledge base. 
Altogether these tools guide the knowledge engineer in his aim to restore consistency and therefore enable him to build a consistent and usable knowledge base that can be employed in probabilistic expert systems. ",Measuring Inconsistency in Probabilistic Knowledge Bases " There has been a great deal of recent interest in methods for performing lifted inference; however, most of this work assumes that the first-order model is given as input to the system. Here, we describe lifted inference algorithms that determine symmetries and automatically lift the probabilistic model to speed up inference. In particular, we describe approximate lifted inference techniques that allow the user to trade off inference accuracy for computational efficiency by using a handful of tunable parameters, while keeping the error bounded. Our algorithms are closely related to the graph-theoretic concept of bisimulation. We report experiments on both synthetic and real data to show that in the presence of symmetries, run-times for inference can be improved significantly, with approximate lifted inference providing orders of magnitude speedup over ground inference. ",Bisimulation-based Approximate Lifted Inference " The specification of a Markov decision process (MDP) can be difficult. Reward function specification is especially problematic; in practice, it is often cognitively complex and time-consuming for users to precisely specify rewards. This work casts the problem of specifying rewards as one of preference elicitation and aims to minimize the degree of precision with which a reward function must be specified while still allowing optimal or near-optimal policies to be produced. We first discuss how robust policies can be computed for MDPs given only partial reward information using the minimax regret criterion. We then demonstrate how regret can be reduced by efficiently eliciting reward information using bound queries, using regret-reduction as a means for choosing suitable queries. Empirical results demonstrate that regret-based reward elicitation offers an effective way to produce near-optimal policies without resorting to the precise specification of the entire reward function. ",Regret-based Reward Elicitation for Markov Decision Processes " Logical inference algorithms for conditional independence (CI) statements have important applications from testing consistency during knowledge elicitation to constraint-based structure learning of graphical models. We prove that the implication problem for CI statements is decidable, given that the size of the domains of the random variables is known and fixed. We will present an approximate logical inference algorithm which combines a falsification and a novel validation algorithm. The validation algorithm represents each set of CI statements as a sparse 0-1 matrix A and validates instances of the implication problem by solving specific linear programs with constraint matrix A. We will show experimentally that the algorithm is both effective and efficient in validating and falsifying instances of the probabilistic CI implication problem. ","Logical Inference Algorithms and Matrix Representations for Probabilistic Conditional Independence" " Computational analysis of time-course data with an underlying causal structure is needed in a variety of domains, including neural spike trains, stock price movements, and gene expression levels. 
However, it can be challenging to determine from just the numerical time-course data alone what is coordinating the visible processes, to separate the underlying prima facie causes into genuine and spurious causes and to do so with a feasible computational complexity. For this purpose, we have been developing a novel algorithm based on a framework that combines notions of causality in philosophy with algorithmic approaches built on model checking and statistical techniques for multiple hypotheses testing. The causal relationships are described in terms of temporal logic formulae, reframing the inference problem in terms of model checking. The logic used, PCTL, allows description of both the time between cause and effect and the probability of this relationship being observed. We show that equipped with these causal formulae with their associated probabilities we may compute the average impact a cause makes to its effect and then discover statistically significant causes through the concepts of multiple hypothesis testing (treating each causal relationship as a hypothesis), and false discovery control. By exploring a well-chosen family of potentially all significant hypotheses with reasonably minimal description length, it is possible to tame the algorithm's computational complexity while exploring the nearly complete search-space of all prima facie causes. We have tested these ideas in a number of domains and illustrate them here with two examples. ",The Temporal Logic of Causal Structures " First-order probabilistic models combine the representational power of first-order logic with graphical models. There is an ongoing effort to design lifted inference algorithms for first-order probabilistic models. We analyze lifted inference from the perspective of constraint processing and, through this viewpoint, we analyze and compare existing approaches and expose their advantages and limitations. Our theoretical results show that the wrong choice of constraint processing method can lead to an exponential increase in computational complexity. Our empirical tests confirm the importance of constraint processing in lifted inference. This is the first theoretical and empirical study of constraint processing in lifted inference. ",Constraint Processing in Lifted Probabilistic Inference " A major benefit of graphical models is that most knowledge is captured in the model structure. Many models, however, produce inference problems with a lot of symmetries not reflected in the graphical structure and hence not exploitable by efficient inference techniques such as belief propagation (BP). In this paper, we present a new and simple BP algorithm, called counting BP, that exploits such additional symmetries. Starting from a given factor graph, counting BP first constructs a compressed factor graph of cluster nodes and cluster factors, corresponding to sets of nodes and factors that are indistinguishable given the evidence. Then it runs a modified BP algorithm on the compressed graph that is equivalent to running BP on the original factor graph. Our experiments show that counting BP is applicable to a variety of important AI tasks such as (dynamic) relational models and Boolean model counting, and that significant efficiency gains are obtainable, often by orders of magnitude. ",Counting Belief Propagation " A Bayesian belief network models a joint distribution with a directed acyclic graph representing dependencies among variables and network parameters characterizing conditional distributions. 
The parameters are viewed as random variables to quantify uncertainty about their values. Belief nets are used to compute responses to queries; i.e., conditional probabilities of interest. A query is a function of the parameters, hence a random variable. Van Allen et al. (2001, 2008) showed how to quantify uncertainty about a query via a delta method approximation of its variance. We develop more accurate approximations for both query mean and variance. The key idea is to extend the query mean approximation to a ""doubled network"" involving two independent replicates. Our method assumes complete data and can be applied to discrete, continuous, and hybrid networks (provided discrete variables have only discrete parents). We analyze several improvements, and provide empirical studies to demonstrate their effectiveness. ","Improved Mean and Variance Approximations for Belief Net Responses via Network Doubling" " Generating optimal plans in highly dynamic environments is challenging. Plans are predicated on an assumed initial state, but this state can change unexpectedly during plan generation, potentially invalidating the planning effort. In this paper we make three contributions: (1) We propose a novel algorithm for generating optimal plans in settings where frequent, unexpected events interfere with planning. It is able to quickly distinguish relevant from irrelevant state changes, and to update the existing planning search tree if necessary. (2) We argue for a new criterion for evaluating plan adaptation techniques: the relative running time compared to the ""size"" of changes. This is significant since during recovery more changes may occur that need to be recovered from subsequently, and in order for this process of repeated recovery to terminate, recovery time has to converge. (3) We show empirically that our approach can converge and find optimal plans in environments that would ordinarily defy planning due to their high dynamics. ",Generating Optimal Plans in Highly-Dynamic Domains " We introduce a challenging real-world planning problem where actions must be taken at each location in a spatial area at each point in time. We use forestry planning as the motivating application. In Large Scale Spatial-Temporal (LSST) planning problems, the state and action spaces are defined as the cross-products of many local state and action spaces spread over a large spatial area such as a city or forest. These problems possess state uncertainty, have complex utility functions involving spatial constraints and we generally must rely on simulations rather than an explicit transition model. We define LSST problems as reinforcement learning problems and present a solution using policy gradients. We compare two different policy formulations: an explicit policy that identifies each location in space and the action to take there; and an abstract policy that defines the proportion of actions to take across all locations in space. We show that the abstract policy is more robust and achieves higher rewards with far fewer parameters than the elementary policy. This abstract policy is also a better fit to the properties that practitioners in LSST problem domains require for such methods to be widely useful. 
","Seeing the Forest Despite the Trees: Large Scale Spatial-Temporal Decision Making" " This paper presents complexity analysis and variational methods for inference in probabilistic description logics featuring Boolean operators, quantification, qualified number restrictions, nominals, inverse roles and role hierarchies. Inference is shown to be PEXP-complete, and variational methods are designed so as to exploit logical inference whenever possible. ","Complexity Analysis and Variational Inference for Interpretation-based Probabilistic Description Logic" " Continuous-time Bayesian networks is a natural structured representation language for multicomponent stochastic processes that evolve continuously over time. Despite the compact representation, inference in such models is intractable even in relatively simple structured networks. Here we introduce a mean field variational approximation in which we use a product of inhomogeneous Markov processes to approximate a distribution over trajectories. This variational approach leads to a globally consistent distribution, which can be efficiently queried. Additionally, it provides a lower bound on the probability of observations, thus making it attractive for learning tasks. We provide the theoretical foundations for the approximation, an efficient implementation that exploits the wide range of highly optimized ordinary differential equations (ODE) solvers, experimentally explore characterizations of processes for which this approximation is suitable, and show applications to a large-scale realworld inference problem. ","Mean Field Variational Approximation for Continuous-Time Bayesian Networks" " We study a subclass of POMDPs, called Deterministic POMDPs, that is characterized by deterministic actions and observations. These models do not provide the same generality of POMDPs yet they capture a number of interesting and challenging problems, and permit more efficient algorithms. Indeed, some of the recent work in planning is built around such assumptions mainly by the quest of amenable models more expressive than the classical deterministic models. We provide results about the fundamental properties of Deterministic POMDPs, their relation with AND/OR search problems and algorithms, and their computational complexity. ",Deterministic POMDPs Revisited " We present a new method to propagate lower bounds on conditional probability distributions in conventional Bayesian networks. Our method guarantees to provide outer approximations of the exact lower bounds. A key advantage is that we can use any available algorithms and tools for Bayesian networks in order to represent and infer lower bounds. This new method yields results that are provable exact for trees with binary variables, and results which are competitive to existing approximations in credal networks for all other network structures. Our method is not limited to a specific kind of network structure. Basically, it is also not restricted to a specific kind of inference, but we restrict our analysis to prognostic inference in this article. The computational complexity is superior to that of other existing approaches. ","Lower Bound Bayesian Networks - An Efficient Inference of Lower Bounds on Probability Distributions in Bayesian Networks" " Soft sets, as a mathematical tool for dealing with uncertainty, have recently gained considerable attention, including some successful applications in information processing, decision, demand analysis, and forecasting. 
To construct new soft sets from given soft sets, some operations on soft sets have been proposed. Unfortunately, such operations cannot keep all classical set-theoretic laws true for soft sets. In this paper, we redefine the intersection, complement, and difference of soft sets and investigate the algebraic properties of these operations along with a known union operation. We find that the new operation system on soft sets inherits all basic properties of operations on classical sets, which justifies our definitions. ",Operations on soft sets revisited " Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that contains the two celebrated policy and value iteration methods. Despite its generality, MPI has not been thoroughly studied, especially its approximation form which is used when the state and/or action spaces are large or infinite. In this paper, we propose three implementations of approximate MPI (AMPI) that are extensions of well-known approximate DP algorithms: fitted-value iteration, fitted-Q iteration, and classification-based policy iteration. We provide error propagation analyses that unify those for approximate policy and value iteration. On the last classification-based implementation, we develop a finite-sample analysis that shows that MPI's main parameter allows to control the balance between the estimation error of the classifier and the overall value function approximation. ",Approximate Modified Policy Iteration " Even today in Twenty First Century Handwritten communication has its own stand and most of the times, in daily life it is globally using as means of communication and recording the information like to be shared with others. Challenges in handwritten characters recognition wholly lie in the variation and distortion of handwritten characters, since different people may use different style of handwriting, and direction to draw the same shape of the characters of their known script. This paper demonstrates the nature of handwritten characters, conversion of handwritten data into electronic data, and the neural network approach to make machine capable of recognizing hand written characters. ",Machine Recognition of Hand Written Characters using Neural Networks " A simplified description of Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Situation) is presented. We have adapted the TOPSIS description from existing Fuzzy theory literature and distilled the bare minimum concepts required for understanding and applying TOPSIS. An example has been worked out to illustrate the application of TOPSIS for a multi-criteria group decision making scenario. ",A Simplified Description of Fuzzy TOPSIS " In order to involve user knowledge in determining equality of sets, which may not be equal in the mathematical sense, three types of approximate (rough) equalities were introduced by Novotny and Pawlak ([8, 9, 10]). These notions were generalized by Tripathy, Mitra and Ojha ([13]), who introduced the concepts of approximate (rough) equivalences of sets. Rough equivalences capture equality of sets at a higher level than rough equalities. More properties of these concepts were established in [14]. Combining the conditions for the two types of approximate equalities, two more approximate equalities were introduced by Tripathy [12] and a comparative analysis of their relative efficiency was provided. In [15], the four types of approximate equalities were extended by considering rough fuzzy sets instead of only rough sets. 
In fact the concepts of leveled approximate equalities were introduced and properties were studied. In this paper we proceed further by introducing and studying the approximate equalities based on rough intuitionistic fuzzy sets instead of rough fuzzy sets. That is we introduce the concepts of approximate (rough)equalities of intuitionistic fuzzy sets and study their properties. We provide some real life examples to show the applications of rough equalities of fuzzy sets and rough equalities of intuitionistic fuzzy sets. ","Approximate Equalities on Rough Intuitionistic Fuzzy Sets and an Analysis of Approximate Equalities" " The causal structure of cognition can be simulated but not implemented computationally, just as the causal structure of a comet can be simulated but not implemented computationally. The only thing that allows us even to imagine otherwise is that cognition, unlike a comet, is invisible (to all but the cognizer). ",The Causal Topography of Cognition " Within the framework proposed in this paper, we address the issue of extending the certain networks to a fuzzy certain networks in order to cope with a vagueness and limitations of existing models for decision under imprecise and uncertain knowledge. This paper proposes a framework that combines two disciplines to exploit their own advantages in uncertain and imprecise knowledge representation problems. The framework proposed is a possibilistic logic based one in which Bayesian nodes and their properties are represented by local necessity-valued knowledge base. Data in properties are interpreted as set of valuated formulas. In our contribution possibilistic Bayesian networks have a qualitative part and a quantitative part, represented by local knowledge bases. The general idea is to study how a fusion of these two formalisms would permit representing compact way to solve efficiently problems for knowledge representation. We show how to apply possibility and necessity measures to the problem of knowledge representation with large scale data. On the other hand fuzzification of crisp certainty degrees to fuzzy variables improves the quality of the network and tends to bring smoothness and robustness in the network performance. The general aim is to provide a new approach for decision under uncertainty that combines three methodologies: Bayesian networks certainty distribution and fuzzy logic. ","Fuzzy Knowledge Representation Based on Possibilistic and Necessary Bayesian Networks" " The main objective of this paper is to develop a new semantic Network structure, based on the fuzzy sets theory, used in Artificial Intelligent system in order to provide effective on-line assistance to users of new technological systems. This Semantic Networks is used to describe the knowledge of an ""ideal"" expert while fuzzy sets are used both to describe the approximate and uncertain knowledge of novice users who intervene to match fuzzy labels of a query with categories from an ""ideal"" expert. The technical system we consider is a word processor software, with Objects such as ""Word"" and Goals such as ""Cut"" or ""Copy"". We suggest to consider the set of the system's Goals as a set of linguistic variables to which corresponds a set of possible linguistic values based on the fuzzy set. We consider, therefore, a set of interpretation's levels for these possible values to which corresponds a set of membership functions. 
We also propose a method to measure the similarity degree between different fuzzy linguistic variables for the partition of the semantic network in class of similar objects to make easy the diagnosis of the user's fuzzy queries. ","Use of Fuzzy Sets in Semantic Nets for Providing On-Line Assistance to User of Technological Systems" " Feature weighting is a technique used to approximate the optimal degree of influence of individual features. This paper presents a feature weighting method for Document Image Retrieval System (DIRS) based on keyword spotting. In this method, we weight the feature using coefficient of multiple correlations. Coefficient of multiple correlations can be used to describe the synthesized effects and correlation of each feature. The aim of this paper is to show that feature weighting increases the performance of DIRS. After applying the feature weighting method to DIRS the average precision is 93.23% and average recall become 98.66% respectively ","Feature Weighting for Improving Document Image Retrieval System Performance" " In this paper, we are trying to examine trade offs between fuzzy logic and certain Bayesian networks and we propose to combine their respective advantages into fuzzy certain Bayesian networks (FCBN), a certain Bayesian networks of fuzzy random variables. This paper deals with different definitions and classifications of uncertainty, sources of uncertainty, and theories and methodologies presented to deal with uncertainty. Fuzzification of crisp certainty degrees to fuzzy variables improves the quality of the network and tends to bring smoothness and robustness in the network performance. The aim is to provide a new approach for decision under uncertainty that combines three methodologies: Bayesian networks certainty distribution and fuzzy logic. Within the framework proposed in this paper, we address the issue of extending the certain networks to a fuzzy certain networks in order to cope with a vagueness and limitations of existing models for decision under imprecise and uncertain knowledge. ",Certain Bayesian Network based on Fuzzy knowledge Bases " Holding commercial negotiations and selecting the best supplier in supply chain management systems are among weaknesses of producers in production process. Therefore, applying intelligent systems may have an effective role in increased speed and improved quality in the selections .This paper introduces a system which tries to trade using multi-agents systems and holding negotiations between any agents. In this system, an intelligent agent is considered for each segment of chains which it tries to send order and receive the response with attendance in negotiation medium and communication with other agents .This paper introduces how to communicate between agents, characteristics of multi-agent and standard registration medium of each agent in the environment. JADE (Java Application Development Environment) was used for implementation and simulation of agents cooperation. ","An Intelligent Approach for Negotiating between chains in Supply Chain Management Systems" " The similarity between trajectory patterns in clustering has played an important role in discovering movement behaviour of different groups of mobile objects. Several approaches have been proposed to measure the similarity between sequences in trajectory data. Most of these measures are based on Euclidean space or on spatial network and some of them have been concerned with temporal aspect or ordering types. 
However, they are not appropriate to characteristics of spatiotemporal mobility patterns in wireless networks. In this paper, we propose a new similarity measure for mobility patterns in cellular space of wireless network. The framework for constructing our measure is composed of two phases as follows. First, we present formal definitions to capture mathematically two spatial and temporal similarity measures for mobility patterns. And then, we define the total similarity measure by means of a weighted combination of these similarities. The truth of the partial and total similarity measures are proved in mathematics. Furthermore, instead of the time interval or ordering, our work makes use of the timestamp at which two mobility patterns share the same cell. A case study is also described to give a comparison of the combination measure with other ones. ","A weighted combination similarity measure for mobility patterns in wireless networks" " Feature reduction is an important concept which is used for reducing dimensions to decrease the computation complexity and time of classification. Since now many approaches have been proposed for solving this problem, but almost all of them just presented a fix output for each input dataset that some of them aren't satisfied cases for classification. In this we proposed an approach as processing input dataset to increase accuracy rate of each feature extraction methods. First of all, a new concept called dispelling classes gradually (DCG) is proposed to increase separability of classes based on their labels. Next, this method is used to process input dataset of the feature reduction approaches to decrease the misclassification error rate of their outputs more than when output is achieved without any processing. In addition our method has a good quality to collate with noise based on adapting dataset with feature reduction approaches. In the result part, two conditions (With process and without that) are compared to support our idea by using some of UCI datasets. ","Dispelling Classes Gradually to Improve Quality of Feature Reduction Approaches" " Software aging is a phenomenon that refers to progressive performance degradation or transient failures or even crashes in long running software systems such as web servers. It mainly occurs due to the deterioration of operating system resource, fragmentation and numerical error accumulation. A primitive method to fight against software aging is software rejuvenation. Software rejuvenation is a proactive fault management technique aimed at cleaning up the system internal state to prevent the occurrence of more severe crash failures in the future. It involves occasionally stopping the running software, cleaning its internal state and restarting it. An optimized schedule for performing the software rejuvenation has to be derived in advance because a long running application could not be put down now and then as it may lead to waste of cost. This paper proposes a method to derive an accurate and optimized schedule for rejuvenation of a web server (Apache) by using Radial Basis Function (RBF) based Feed Forward Neural Network, a variant of Artificial Neural Networks (ANN). Aging indicators are obtained through experimental setup involving Apache web server and clients, which acts as input to the neural network model. This method is better than existing ones because usage of RBF leads to better accuracy and speed in convergence. 
",Software Aging Analysis of Web Server Using Neural Networks " A software agent may be a member of a Multi-Agent System (MAS) which is collectively performing a range of complex and intelligent tasks. In the hospital, scheduling decisions are finding difficult to schedule because of the dynamic changes and distribution. In order to face this problem with dynamic changes in the hospital, a new method, Distributed Optimized Patient Scheduling with Grouping (DOPSG) has been proposed. The goal of this method is that there is no necessity for knowing patient agents information globally. With minimal information this method works effectively. Scheduling problem can be solved for multiple departments in the hospital. Patient agents have been scheduled to the resource agent based on the patient priority to reduce the waiting time of patient agent and to reduce idle time of resources. ",A Distributed Optimized Patient Scheduling using Partial Information " The approach described here allows using membership function to represent imprecise and uncertain knowledge by learning in Fuzzy Semantic Networks. This representation has a great practical interest due to the possibility to realize on the one hand, the construction of this membership function from a simple value expressing the degree of interpretation of an Object or a Goal as compared to an other and on the other hand, the adjustment of the membership function during the apprenticeship. We show, how to use these membership functions to represent the interpretation of an Object (respectively of a Goal) user as compared to an system Object (respectively to a Goal). We also show the possibility to make decision for each representation of an user Object compared to a system Object. This decision is taken by determining decision coefficient calculates according to the nucleus of the membership function of the user Object. ","Softening Fuzzy Knowledge Representation Tool with the Learning of New Words in Natural Language" " This paper presents a method of optimization, based on both Bayesian Analysis technical and Gallois Lattice, of a Fuzzy Semantic Networks. The technical System we use learn by interpreting an unknown word using the links created between this new word and known words. The main link is provided by the context of the query. When novice's query is confused with an unknown verb (goal) applied to a known noun denoting either an object in the ideal user's Network or an object in the user's Network, the system infer that this new verb corresponds to one of the known goal. With the learning of new words in natural language as the interpretation, which was produced in agreement with the user, the system improves its representation scheme at each experiment with a new user and, in addition, takes advantage of previous discussions with users. The semantic Net of user objects thus obtained by these kinds of learning is not always optimal because some relationships between couple of user objects can be generalized and others suppressed according to values of forces that characterize them. Indeed, to simplify the obtained Net, we propose to proceed to an inductive Bayesian analysis, on the Net obtained from Gallois lattice. The objective of this analysis can be seen as an operation of filtering of the obtained descriptive graph. 
","Fuzzy Knowledge Representation, Learning and Optimization with Bayesian Analysis in Fuzzy Semantic Networks" " The approach described here allows to use the fuzzy Object Based Representation of imprecise and uncertain knowledge. This representation has a great practical interest due to the possibility to realize reasoning on classification with a fuzzy semantic network based system. For instance, the distinction between necessary, possible and user classes allows to take into account exceptions that may appear on fuzzy knowledge-base and facilitates integration of user's Objects in the base. This approach describes the theoretical aspects of the architecture of the whole experimental A.I. system we built in order to provide effective on-line assistance to users of new technological systems: the understanding of ""how it works"" and ""how to complete tasks"" from queries in quite natural languages. In our model, procedural semantic networks are used to describe the knowledge of an ""ideal"" expert while fuzzy sets are used both to describe the approximative and uncertain knowledge of novice users in fuzzy semantic networks which intervene to match fuzzy labels of a query with categories from our ""ideal"" expert. ","Uncertain and Approximative Knowledge Representation to Reasoning on Classification with a Fuzzy Networks Based System" " Answer Set Programming (ASP) is a well-established paradigm of declarative programming in close relationship with other declarative formalisms such as SAT Modulo Theories, Constraint Handling Rules, FO(.), PDDL and many others. Since its first informal editions, ASP systems have been compared in the now well-established ASP Competition. The Third (Open) ASP Competition, as the sequel to the ASP Competitions Series held at the University of Potsdam in Germany (2006-2007) and at the University of Leuven in Belgium in 2009, took place at the University of Calabria (Italy) in the first half of 2011. Participants competed on a pre-selected collection of benchmark problems, taken from a variety of domains as well as real world applications. The Competition ran on two tracks: the Model and Solve (M&S) Track, based on an open problem encoding, and open language, and open to any kind of system based on a declarative specification paradigm; and the System Track, run on the basis of fixed, public problem encodings, written in a standard ASP language. This paper discusses the format of the Competition and the rationale behind it, then reports the results for both tracks. Comparison with the second ASP competition and state-of-the-art solutions for some of the benchmark domains is eventually discussed. To appear in Theory and Practice of Logic Programming (TPLP). ",The third open Answer Set Programming competition " The paper introduces AND/OR importance sampling for probabilistic graphical models. In contrast to importance sampling, AND/OR importance sampling caches samples in the AND/OR space and then extracts a new sample mean from the stored samples. We prove that AND/OR importance sampling may have lower variance than importance sampling; thereby providing a theoretical justification for preferring it over importance sampling. Our empirical evaluation demonstrates that AND/OR importance sampling is far more accurate than importance sampling in many cases. ",AND/OR Importance Sampling " In this paper, we consider planning in stochastic shortest path (SSP) problems, a subclass of Markov Decision Problems (MDP). 
We focus on medium-size problems whose state space can be fully enumerated. This problem has numerous important applications, such as navigation and planning under uncertainty. We propose a new approach for constructing a multi-level hierarchy of progressively simpler abstractions of the original problem. Once computed, the hierarchy can be used to speed up planning by first finding a policy for the most abstract level and then recursively refining it into a solution to the original problem. This approach is fully automated and delivers a speed-up of two orders of magnitude over a state-of-the-art MDP solver on sample problems while returning near-optimal solutions. We also prove theoretical bounds on the loss of solution optimality resulting from the use of abstractions. ","Speeding Up Planning in Markov Decision Processes via Automatically Constructed Abstractions" " The problem of learning discrete Bayesian networks from data is encoded as a weighted MAX-SAT problem and the MaxWalkSat local search algorithm is used to address it. For each dataset, the per-variable summands of the (BDeu) marginal likelihood for different choices of parents ('family scores') are computed prior to applying MaxWalkSat. Each permissible choice of parents for each variable is encoded as a distinct propositional atom and the associated family score encoded as a 'soft' weighted single-literal clause. Two approaches to enforcing acyclicity are considered: either by encoding the ancestor relation or by attaching a total order to each graph and encoding that. The latter approach gives better results. Learning experiments have been conducted on 21 synthetic datasets sampled from 7 BNs. The largest dataset has 10,000 datapoints and 60 variables producing (for the 'ancestor' encoding) a weighted CNF input file with 19,932 atoms and 269,367 clauses. For most datasets, MaxWalkSat quickly finds BNs with higher BDeu score than the 'true' BN. The effect of adding prior information is assessed. It is further shown that Bayesian model averaging can be effected by collecting BNs generated during the search. ",Bayesian network learning by compiling to weighted MAX-SAT " This paper describes a new algorithm to solve the decision making problem in Influence Diagrams based on algorithms for credal networks. Decision nodes are associated to imprecise probability distributions and a reformulation is introduced that finds the global maximum strategy with respect to the expected utility. We work with Limited Memory Influence Diagrams, which generalize most Influence Diagram proposals and handle simultaneous decisions. Besides the global optimum method, we explore an anytime approximate solution with a guaranteed maximum error and show that imprecise probabilities are handled in a straightforward way. Complexity issues and experiments with random diagrams and an effects-based military planning problem are discussed. ",Strategy Selection in Influence Diagrams using Imprecise Probabilities " A graphical multiagent model (GMM) represents a joint distribution over the behavior of a set of agents. One source of knowledge about agents' behavior may come from gametheoretic analysis, as captured by several graphical game representations developed in recent years. GMMs generalize this approach to express arbitrary distributions, based on game descriptions or other sources of knowledge bearing on beliefs about agent behavior. 
To illustrate the flexibility of GMMs, we exhibit game-derived models that allow probabilistic deviation from equilibrium, as well as models based on heuristic action choice. We investigate three different methods of integrating these models into a single model representing the combined knowledge sources. To evaluate the predictive performance of the combined model, we treat as actual outcome the behavior produced by a reinforcement learning process. We find that combining the two knowledge sources, using any of the methods, provides better predictions than either source alone. Among the combination methods, mixing data outperforms the opinion pool and direct update methods investigated in this empirical trial. ",Knowledge Combination in Graphical Multiagent Model " We conjecture that the worst case number of experiments necessary and sufficient to discover a causal graph uniquely given its observational Markov equivalence class can be specified as a function of the largest clique in the Markov equivalence class. We provide an algorithm that computes intervention sets that we believe are optimal for the above task. The algorithm builds on insights gained from the worst case analysis in Eberhardt et al. (2005) for sequences of experiments when all possible directed acyclic graphs over N variables are considered. A simulation suggests that our conjecture is correct. We also show that a generalization of our conjecture to other classes of possible graph hypotheses cannot be given easily, and in what sense the algorithm is then no longer optimal. ",Almost Optimal Intervention Sets for Causal Discovery " Bounded policy iteration is an approach to solving infinite-horizon POMDPs that represents policies as stochastic finite-state controllers and iteratively improves a controller by adjusting the parameters of each node using linear programming. In the original algorithm, the size of the linear programs, and thus the complexity of policy improvement, depends on the number of parameters of each node, which grows with the size of the controller. But in practice, the number of parameters of a node with non-zero values is often very small, and does not grow with the size of the controller. Based on this observation, we develop a version of bounded policy iteration that leverages the sparse structure of a stochastic finite-state controller. In each iteration, it improves a policy by the same amount as the original algorithm, but with much better scalability. ",Sparse Stochastic Finite-State Controllers for POMDPs " Approximate inference in dynamic systems is the problem of estimating the state of the system given a sequence of actions and partial observations. High precision estimation is fundamental in many applications like diagnosis, natural language processing, tracking, planning, and robotics. In this paper we present an algorithm that samples possible deterministic executions of a probabilistic sequence. The algorithm takes advantage of a compact representation (using first order logic) for actions and world states to improve the precision of its estimation. Theoretical and empirical results show that the algorithm's expected error is smaller than propositional sampling and Sequential Monte Carlo (SMC) sampling techniques. 
",Sampling First Order Logical Particles " While known algorithms for sensitivity analysis and parameter tuning in probabilistic networks have a running time that is exponential in the size of the network, the exact computational complexity of these problems has not been established as yet. In this paper we study several variants of the tuning problem and show that these problems are NPPP-complete in general. We further show that the problems remain NP-complete or PP-complete, for a number of restricted variants. These complexity results provide insight in whether or not recent achievements in sensitivity analysis and tuning can be extended to more general, practicable methods. ","The Computational Complexity of Sensitivity Analysis and Parameter Tuning" " Approximate linear programming (ALP) is an efficient approach to solving large factored Markov decision processes (MDPs). The main idea of the method is to approximate the optimal value function by a set of basis functions and optimize their weights by linear programming (LP). This paper proposes a new ALP approximation. Comparing to the standard ALP formulation, we decompose the constraint space into a set of low-dimensional spaces. This structure allows for solving the new LP efficiently. In particular, the constraints of the LP can be satisfied in a compact form without an exponential dependence on the treewidth of ALP constraints. We study both practical and theoretical aspects of the proposed approach. Moreover, we demonstrate its scale-up potential on an MDP with more than 2^100 states. ",Partitioned Linear Programming Approximations for MDPs " Graphical models are usually learned without regard to the cost of doing inference with them. As a result, even if a good model is learned, it may perform poorly at prediction, because it requires approximate inference. We propose an alternative: learning models with a score function that directly penalizes the cost of inference. Specifically, we learn arithmetic circuits with a penalty on the number of edges in the circuit (in which the cost of inference is linear). Our algorithm is equivalent to learning a Bayesian network with context-specific independence by greedily splitting conditional distributions, at each step scoring the candidates by compiling the resulting network into an arithmetic circuit, and using its size as the penalty. We show how this can be done efficiently, without compiling a circuit from scratch for each candidate. Experiments on several real-world domains show that our algorithm is able to learn tractable models with very large treewidth, and yields more accurate predictions than a standard context-specific Bayesian network learner, in far less time. ",Learning Arithmetic Circuits " An efficient policy search algorithm should estimate the local gradient of the objective function, with respect to the policy parameters, from as few trials as possible. Whereas most policy search methods estimate this gradient by observing the rewards obtained during policy trials, we show, both theoretically and empirically, that taking into account the sensor data as well gives better gradient estimates and hence faster learning. The reason is that rewards obtained during policy execution vary from trial to trial due to noise in the environment; sensor data, which correlates with the noise, can be used to partially correct for this variation, resulting in an estimatorwith lower variance. 
",Improving Gradient Estimation by Incorporating Sensor Data " Bayesian networks can be used to extract explanations about the observed state of a subset of variables. In this paper, we explicate the desiderata of an explanation and confront them with the concept of explanation proposed by existing methods. The necessity of taking into account causal approaches when a causal graph is available is discussed. We then introduce causal explanation trees, based on the construction of explanation trees using the measure of causal information ow (Ay and Polani, 2006). This approach is compared to several other methods on known networks. ",Explanation Trees for Causal Bayesian Networks " Model-based Bayesian reinforcement learning has generated significant interest in the AI community as it provides an elegant solution to the optimal exploration-exploitation tradeoff in classical reinforcement learning. Unfortunately, the applicability of this type of approach has been limited to small domains due to the high complexity of reasoning about the joint posterior over model parameters. In this paper, we consider the use of factored representations combined with online planning techniques, to improve scalability of these methods. The main contribution of this paper is a Bayesian framework for learning the structure and parameters of a dynamical system, while also simultaneously planning a (near-)optimal sequence of actions. ",Model-Based Bayesian Reinforcement Learning in Large Structured Domains " In this work we present Cutting Plane Inference (CPI), a Maximum A Posteriori (MAP) inference method for Statistical Relational Learning. Framed in terms of Markov Logic and inspired by the Cutting Plane Method, it can be seen as a meta algorithm that instantiates small parts of a large and complex Markov Network and then solves these using a conventional MAP method. We evaluate CPI on two tasks, Semantic Role Labelling and Joint Entity Resolution, while plugging in two different MAP inference methods: the current method of choice for MAP inference in Markov Logic, MaxWalkSAT, and Integer Linear Programming. We observe that when used with CPI both methods are significantly faster than when used alone. In addition, CPI improves the accuracy of MaxWalkSAT and maintains the exactness of Integer Linear Programming. ",Improving the Accuracy and Efficiency of MAP Inference for Markov Logic " Deciding what to sense is a crucial task, made harder by dependencies and by a nonadditive utility function. We develop approximation algorithms for selecting an optimal set of measurements, under a dependency structure modeled by a tree-shaped Bayesian network (BN). Our approach is a generalization of composing anytime algorithm represented by conditional performance profiles. This is done by relaxing the input monotonicity assumption, and extending the local compilation technique to more general classes of performance profiles (PPs). We apply the extended scheme to selecting a subset of measurements for choosing a maximum expectation variable in a binary valued BN, and for minimizing the worst variance in a Gaussian BN. ","Observation Subset Selection as Local Compilation of Performance Profiles" " This paper develops a measure for bounding the performance of AND/OR search algorithms for solving a variety of queries over graphical models. We show how drawing a connection to the recent notion of hypertree decompositions allows to exploit determinism in the problem specification and produce tighter bounds. 
We demonstrate on a variety of practical problem instances that we are often able to improve upon existing bounds by several orders of magnitude. ",Bounding Search Space Size via (Hyper)tree Decompositions " We present and evaluate new techniques for designing algorithm portfolios. In our view, the problem has both a scheduling aspect and a machine learning aspect. Prior work has largely addressed one of the two aspects in isolation. Building on recent work on the scheduling aspect of the problem, we present a technique that addresses both aspects simultaneously and has attractive theoretical guarantees. Experimentally, we show that this technique can be used to improve the performance of state-of-the-art algorithms for Boolean satisfiability, zero-one integer programming, and A.I. planning. ",New Techniques for Algorithm Portfolio Design " Numerous temporal inference tasks such as fault monitoring and anomaly detection exhibit a persistence property: for example, if something breaks, it stays broken until an intervention. When modeled as a Dynamic Bayesian Network, persistence adds dependencies between adjacent time slices, often making exact inference over time intractable using standard inference algorithms. However, we show that persistence implies a regular structure that can be exploited for efficient inference. We present three successively more general classes of models: persistent causal chains (PCCs), persistent causal trees (PCTs) and persistent polytrees (PPTs), and the corresponding exact inference algorithms that exploit persistence. We show that analytic asymptotic bounds for our algorithms compare favorably to junction tree inference; and we demonstrate empirically that we can perform exact smoothing on the order of 100 times faster than the approximate Boyen-Koller method on randomly generated instances of persistent tree models. We also show how to handle non-persistent variables and how persistence can be exploited effectively for approximate filtering. ",Efficient inference in persistent Dynamic Bayesian Networks " Planning can often be simplified by decomposing the task into smaller tasks arranged hierarchically. Charlin et al. [4] recently showed that the hierarchy discovery problem can be framed as a non-convex optimization problem. However, the inherent computational difficulty of solving such an optimization problem makes it hard to scale to real-world problems. In another line of research, Toussaint et al. [18] developed a method to solve planning problems by maximum-likelihood estimation. In this paper, we show how the hierarchy discovery problem in partially observable domains can be tackled using a similar maximum likelihood approach. Our technique first transforms the problem into a dynamic Bayesian network through which a hierarchical structure can naturally be discovered while optimizing the policy. Experimental results demonstrate that this approach scales better than previous techniques based on non-convex optimization. ",Hierarchical POMDP Controller Optimization by Likelihood Maximization " We address the problem of identifying dynamic sequential plans in the framework of causal Bayesian networks, and show that the problem is reduced to identifying causal effects, for which there are complete identification algorithms available in the literature. 
",Identifying Dynamic Sequential Plans " In this paper we introduce Refractor Importance Sampling (RIS), an improvement to reduce error variance in Bayesian network importance sampling propagation under evidential reasoning. We prove the existence of a collection of importance functions that are close to the optimal importance function under evidential reasoning. Based on this theoretic result we derive the RIS algorithm. RIS approaches the optimal importance function by applying localized arc changes to minimize the divergence between the evidence-adjusted importance function and the optimal importance function. The validity and performance of RIS is empirically tested with a large setof synthetic Bayesian networks and two real-world networks. ",Refractor Importance Sampling " The paper introduces a generalization for known probabilistic models such as log-linear and graphical models, called here multiplicative models. These models, that express probabilities via product of parameters are shown to capture multiple forms of contextual independence between variables, including decision graphs and noisy-OR functions. An inference algorithm for multiplicative models is provided and its correctness is proved. The complexity analysis of the inference algorithm uses a more refined parameter than the tree-width of the underlying graph, and shows the computational cost does not exceed that of the variable elimination algorithm in graphical models. The paper ends with examples where using the new models and algorithm is computationally beneficial. ",Inference for Multiplicative Models " Online learning aims to perform nearly as well as the best hypothesis in hindsight. For some hypothesis classes, though, even finding the best hypothesis offline is challenging. In such offline cases, local search techniques are often employed and only local optimality guaranteed. For online decision-making with such hypothesis classes, we introduce local regret, a generalization of regret that aims to perform nearly as well as only nearby hypotheses. We then present a general algorithm to minimize local regret with arbitrary locality graphs. We also show how the graph structure can be exploited to drastically speed learning. These algorithms are then demonstrated on a diverse set of online problems: online disjunct learning, online Max-SAT, and online decision tree learning. ",On Local Regret " The rules of d-separation provide a framework for deriving conditional independence facts from model structure. However, this theory only applies to simple directed graphical models. We introduce relational d-separation, a theory for deriving conditional independence in relational models. We provide a sound, complete, and computationally efficient method for relational d-separation, and we present empirical results that demonstrate effectiveness. ",Identifying Independence in Relational Models " Decision circuits have been developed to perform efficient evaluation of influence diagrams [Bhattacharjya and Shachter, 2007], building on the advances in arithmetic circuits for belief network inference [Darwiche,2003]. In the process of model building and analysis, we perform sensitivity analysis to understand how the optimal solution changes in response to changes in the model. 
When sequential decision problems under uncertainty are represented as decision circuits, we can exploit the efficient solution process embodied in the decision circuit and the wealth of derivative information available to compute the value of information for the uncertainties in the problem and the effects of changes to model parameters on the value and the optimal strategy. ",Sensitivity analysis in decision circuits " This is the Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, which was held in Montreal, QC, Canada, June 18 - 21 2009. ","Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (2009)" " Computing the probability of evidence even with known error bounds is NP-hard. In this paper we address this hard problem by settling on an easier problem. We propose an approximation which provides high confidence lower bounds on probability of evidence but does not have any guarantees in terms of relative or absolute error. Our proposed approximation is a randomized importance sampling scheme that uses the Markov inequality. However, a straightforward application of the Markov inequality may lead to poor lower bounds. We therefore propose several heuristic measures to improve its performance in practice. Empirical evaluation of our scheme with state-of-the-art lower bounding schemes reveals the promise of our approach. ","Studies in Lower Bounding Probabilities of Evidence using the Markov Inequality" " Choquet expected utility (CEU) is one of the most sophisticated decision criteria used in decision theory under uncertainty. It provides a generalisation of expected utility enhancing both descriptive and prescriptive possibilities. In this paper, we investigate the use of CEU for path-planning under uncertainty with a special focus on robust solutions. We first recall the main features of the CEU model and introduce some examples showing its descriptive potential. Then we focus on the search for Choquet-optimal paths in multivalued implicit graphs where costs depend on different scenarios. After discussing complexity issues, we propose two different heuristic search algorithms to solve the problem. Finally, numerical experiments are reported, showing the practical efficiency of the proposed algorithms. ",Search for Choquet-optimal paths under uncertainty " The ways in which an agent's actions affect the world can often be modeled compactly using a set of relational probabilistic planning rules. This paper addresses the problem of learning such rule sets for multiple related tasks. We take a hierarchical Bayesian approach, in which the system learns a prior distribution over rule sets. We present a class of prior distributions parameterized by a rule set prototype that is stochastically modified to produce a task-specific rule set. We also describe a coordinate ascent algorithm that iteratively optimizes the task-specific rule sets and the prior distribution. Experiments using this algorithm show that transferring information from related tasks significantly reduces the amount of training data required to predict action effects in blocks-world domains. ",Learning Probabilistic Relational Dynamics for Multiple Tasks " We formulate in this paper the mini-bucket algorithm for approximate inference in terms of exact inference on an approximate model produced by splitting nodes in a Bayesian network. The new formulation leads to a number of theoretical and practical implications. 
First, we show that branch-and-bound search algorithms that use mini-bucket bounds may operate in a drastically reduced search space. Second, we show that the proposed formulation inspires new mini-bucket heuristics and allows us to analyze existing heuristics from a new perspective. Finally, we show that this new formulation allows mini-bucket approximations to benefit from recent advances in exact inference, allowing one to significantly increase the reach of these approximations. ","Node Splitting: A Scheme for Generating Upper Bounds in Bayesian Networks" " We describe the semantic foundations for elicitation of generalized additively independent (GAI) utilities using the minimax regret criterion, and propose several new query types and strategies for this purpose. Computational feasibility is obtained by exploiting the local GAI structure in the model. Our results provide a practical approach for implementing preference-based constrained configuration optimization as well as effective search in multiattribute product databases. ",Minimax regret based elicitation of generalized additive utilities " Although a number of related algorithms have been developed to evaluate influence diagrams, exploiting the conditional independence in the diagram, the exact solution has remained intractable for many important problems. In this paper we introduce decision circuits as a means to exploit the local structure usually found in decision problems and to improve the performance of influence diagram analysis. This work builds on the probabilistic inference algorithms using arithmetic circuits to represent Bayesian belief networks [Darwiche, 2003]. Once compiled, these arithmetic circuits efficiently evaluate probabilistic queries on the belief network, and methods have been developed to exploit both the global and local structure of the network. We show that decision circuits can be constructed in a similar fashion and promise similar benefits. ",Evaluating influence diagrams with decision circuits " We present a memory-bounded optimization approach for solving infinite-horizon decentralized POMDPs. Policies for each agent are represented by stochastic finite state controllers. We formulate the problem of optimizing these policies as a nonlinear program, leveraging powerful existing nonlinear optimization techniques for solving the problem. While existing solvers only guarantee locally optimal solutions, we show that our formulation produces higher quality controllers than the state-of-the-art approach. We also incorporate a shared source of randomness in the form of a correlation device to further increase solution quality with only a limited increase in space and time. Our experimental results show that nonlinear optimization can be used to provide high quality, concise solutions to decentralized decision problems under uncertainty. ",Optimizing Memory-Bounded Controllers for Decentralized POMDPs " Most real-world dynamic systems are composed of different components that often evolve at very different rates. In traditional temporal graphical models, such as dynamic Bayesian networks, time is modeled at a fixed granularity, generally selected based on the rate at which the fastest component evolves. Inference must then be performed at this fastest granularity, potentially at significant computational cost. Continuous Time Bayesian Networks (CTBNs) avoid time-slicing in the representation by modeling the system as evolving continuously over time. 
The expectation-propagation (EP) inference algorithm of Nodelman et al. (2005) can then vary the inference granularity over time, but the granularity is uniform across all parts of the system, and must be selected in advance. In this paper, we provide a new EP algorithm that utilizes a general cluster graph architecture where clusters contain distributions that can overlap in both space (set of variables) and time. This architecture allows different parts of the system to be modeled at very different time granularities, according to their current rate of evolution. We also provide an information-theoretic criterion for dynamically re-partitioning the clusters during inference to tune the level of approximation to the current rate of evolution. This avoids the need to hand-select the appropriate granularity, and allows the granularity to adapt as information is transmitted across the network. We present experiments demonstrating that this approach can result in significant computational savings. ",Reasoning at the Right Time Granularity " Compiling graphical models has recently been under intense investigation, especially for probabilistic modeling and processing. We present here a novel data structure for compiling weighted graphical models (in particular, probabilistic models), called AND/OR Multi-Valued Decision Diagram (AOMDD). This is a generalization of our previous work on constraint networks to weighted models. The AOMDD is based on the frameworks of AND/OR search spaces for graphical models, and Ordered Binary Decision Diagrams (OBDD). The AOMDD is a canonical representation of a graphical model, and its size and compilation time are bounded exponentially by the treewidth of the graph, rather than pathwidth as is known for OBDDs. We discuss a Variable Elimination schedule for compilation, present the general APPLY algorithm that combines two weighted AOMDDs, and also present a search-based compilation method. The preliminary experimental evaluation is quite encouraging, showing the potential of the AOMDD data structure. ","AND/OR Multi-Valued Decision Diagrams (AOMDDs) for Weighted Graphical Models" " The paper evaluates the power of best-first search over AND/OR search spaces for solving the Most Probable Explanation (MPE) task in Bayesian networks. The main virtue of the AND/OR representation of the search space is its sensitivity to the structure of the problem, which can translate into significant time savings. In recent years depth-first AND/OR Branch-and-Bound algorithms were shown to be very effective when exploring such search spaces, especially when using caching. Since best-first strategies are known to be superior to depth-first when memory is utilized, exploring the best-first control strategy is called for. The main contribution of this paper is in showing that a recent extension of AND/OR search algorithms from depth-first Branch-and-Bound to best-first is indeed very effective for computing the MPE in Bayesian networks. We demonstrate empirically the superiority of the best-first search approach on various probabilistic networks. ",Best-First AND/OR Search for Most Probable Explanations " Searching the complete space of possible Bayesian networks is intractable for problems of interesting size, so Bayesian network structure learning algorithms, such as the commonly used Sparse Candidate algorithm, employ heuristics. However, these heuristics also restrict the types of relationships that can be learned exclusively from data. 
They are unable to learn relationships that exhibit ""correlation-immunity"", such as parity. To learn Bayesian networks in the presence of correlation-immune relationships, we extend the Sparse Candidate algorithm with a technique called ""skewing"". This technique uses the observation that relationships that are correlation-immune under a specific input distribution may not be correlation-immune under another, sufficiently different distribution. We show that by extending Sparse Candidate with this technique we are able to discover relationships between random variables that are approximately correlation-immune, with a significantly lower computational cost than the alternative of considering multiple parents of a node at a time. ",Learning Bayesian Network Structure from Correlation-Immune Data " Survey propagation (SP) is an exciting new technique that has been remarkably successful at solving very large hard combinatorial problems, such as determining the satisfiability of Boolean formulas. In a promising attempt at understanding the success of SP, it was recently shown that SP can be viewed as a form of belief propagation, computing marginal probabilities over certain objects called covers of a formula. This explanation was, however, shortly dismissed by experiments suggesting that non-trivial covers simply do not exist for large formulas. In this paper, we show that these experiments were misleading: not only do covers exist for large hard random formulas, but SP is surprisingly accurate at computing marginals over these covers despite the existence of many cycles in the formulas. This re-opens a potentially simpler line of reasoning for understanding SP, in contrast to some alternative lines of explanation that have been proposed assuming covers do not exist. ",Survey Propagation Revisited " Relational Markov Random Fields are a general and flexible framework for reasoning about the joint distribution over attributes of a large number of interacting entities. The main computational difficulty in learning such models is inference. Even when dealing with complete data, where one can summarize a large domain by sufficient statistics, learning requires one to compute the expectation of the sufficient statistics given different parameter choices. The typical solution to this problem is to resort to approximate inference procedures, such as loopy belief propagation. Although these procedures are quite efficient, they still require computation that is on the order of the number of interactions (or features) in the model. When learning a large relational model over a complex domain, even such approximations require unrealistic running time. In this paper we show that for a particular class of relational MRFs, which have inherent symmetry, we can perform the inference needed for learning procedures using a template-level belief propagation. This procedure's running time is proportional to the size of the relational model rather than the size of the domain. Moreover, we show that this computational procedure is equivalent to synchronous loopy belief propagation. This enables a dramatic speedup in inference and learning time. We use this procedure to learn relational MRFs for capturing the joint distribution of large protein-protein interaction networks. ",Template Based Inference in Symmetric Relational Markov Random Fields " Preferences play an important role in our everyday lives. 
CP-networks, or CP-nets for short, are graphical models for representing conditional qualitative preferences under ceteris paribus (""all else being equal"") assumptions. Despite their intuitive nature and rich representation, dominance testing with CP-nets is computationally complex, even when the CP-nets are restricted to binary-valued preferences. Tractable algorithms exist for binary CP-nets, but these algorithms are incomplete for multi-valued CP-nets. In this paper, we identify a class of multi-valued CP-nets, which we call more-or-less CP-nets, that have the same computational complexity as binary CP-nets. More-or-less CP-nets exploit the monotonicity of the attribute values and use intervals to aggregate values that induce similar preferences. We then present a search control rule for dominance testing that effectively prunes the search space while preserving completeness. ",More-or-Less CP-Networks " Relational Markov Decision Processes are a useful abstraction for complex reinforcement learning problems and stochastic planning problems. Recent work developed representation schemes and algorithms for planning in such problems using the value iteration algorithm. However, exact versions of more complex algorithms, including policy iteration, have not been developed or analyzed. The paper investigates this potential and makes several contributions. First we observe two anomalies for relational representations showing that the value of some policies is not well defined or cannot be calculated for restricted representation schemes used in the literature. On the other hand, we develop a variant of policy iteration that can get around these anomalies. The algorithm includes an aspect of policy improvement in the process of policy evaluation and thus differs from the original algorithm. We show that despite this difference the algorithm converges to the optimal policy. ",Policy Iteration for Relational MDPs " Combining first-order logic and probability has long been a goal of AI. Markov logic (Richardson & Domingos, 2006) accomplishes this by attaching weights to first-order formulas and viewing them as templates for features of Markov networks. Unfortunately, it does not have the full power of first-order logic, because it is only defined for finite domains. This paper extends Markov logic to infinite domains, by casting it in the framework of Gibbs measures (Georgii, 1988). We show that a Markov logic network (MLN) admits a Gibbs measure as long as each ground atom has a finite number of neighbors. Many interesting cases fall in this category. We also show that an MLN admits a unique measure if the weights of its non-unit clauses are small enough. We then examine the structure of the set of consistent measures in the non-unique case. Many important phenomena, including systems with phase transitions, are represented by MLNs with non-unique measures. We relate the problem of satisfiability in first-order logic to the properties of MLN measures, and discuss how Markov logic relates to previous infinite models. ",Markov Logic in Infinite Domains " Counterfactual statements, e.g., ""my headache would be gone had I taken an aspirin,"" are central to scientific discourse, and are formally interpreted as statements derived from ""alternative worlds"". However, since they invoke hypothetical states of affairs, often incompatible with what is actually known or observed, testing counterfactuals is fraught with conceptual and practical difficulties. 
In this paper, we provide a complete characterization of ""testable counterfactuals,"" namely, counterfactual statements whose probabilities can be inferred from physical experiments. We provide complete procedures for discerning whether a given counterfactual is testable and, if so, expressing its probability in terms of experimental data. ",What Counterfactuals Can Be Tested " Memory-Bounded Dynamic Programming (MBDP) has proved extremely effective in solving decentralized POMDPs with large horizons. We generalize the algorithm and improve its scalability by reducing the complexity with respect to the number of observations from exponential to polynomial. We derive error bounds on solution quality with respect to this new approximation and analyze the convergence behavior. To evaluate the effectiveness of the improvements, we introduce a new, larger benchmark problem. Experimental results show that despite the high complexity of decentralized POMDPs, scalable solution techniques such as MBDP perform surprisingly well. ",Improved Memory-Bounded Dynamic Programming for Decentralized POMDPs " Assistive systems for persons with cognitive disabilities (e.g. dementia) are difficult to build due to the wide range of different approaches people can take to accomplishing the same task, and the significant uncertainties that arise from both the unpredictability of client's behaviours and from noise in sensor readings. Partially observable Markov decision process (POMDP) models have been used successfully as the reasoning engine behind such assistive systems for small multi-step tasks such as hand washing. POMDP models are a powerful, yet flexible framework for modelling assistance that can deal with uncertainty and utility. Unfortunately, POMDPs usually require a very labour intensive, manual procedure for their definition and construction. Our previous work has described a knowledge driven method for automatically generating POMDP activity recognition and context sensitive prompting systems for complex tasks. We call the resulting POMDP a SNAP (SyNdetic Assistance Process). The spreadsheet-like result of the analysis does not correspond to the POMDP model directly and the translation to a formal POMDP representation is required. To date, this translation had to be performed manually by a trained POMDP expert. In this paper, we formalise and automate this translation process using a probabilistic relational model (PRM) encoded in a relational database. We demonstrate the method by eliciting three assistance tasks from non-experts. We validate the resulting POMDP models using case-based simulations to show that they are reasonable for the domains. We also show a complete case study of a designer specifying one database, including an evaluation in a real-life experiment with a human actor. ","Relational Approach to Knowledge Engineering for POMDP-based Assistance Systems as a Translation of a Psychological Model" " There are several contexts of non-monotonic reasoning where a priority between rules is established whose purpose is preventing conflicts. One formalism that has been widely employed for non-monotonic reasoning is the sceptical one known as Defeasible Logic. In Defeasible Logic the tool used for conflict resolution is a preference relation between rules, that establishes the priority among them. In this paper we investigate how to modify such a preference relation in a defeasible logic theory in order to change the conclusions of the theory itself. 
We argue that the approach we adopt is applicable to legal reasoning where users, in general, cannot change facts or rules, but can propose their preferences about the relative strength of the rules. We provide a comprehensive study of the possible combinatorial cases and we identify and analyse the cases where the revision process is successful. After this analysis, we identify three revision/update operators and study them against the AGM postulates for belief revision operators, to discover that only some of these postulates are satisfied by the three operators. ",Revision of Defeasible Logic Preferences " We apply decision-theoretic techniques to construct non-player characters that are able to assist a human player in collaborative games. The method is based on solving Markov decision processes, which can be difficult when the game state is described by many variables. To scale to more complex games, the method allows decomposition of a game task into subtasks, each of which can be modelled by a Markov decision process. Intention recognition is used to infer the subtask that the human is currently performing, allowing the helper to assist the human in performing the correct task. Experiments show that the method can be effective, giving near-human level performance in helping a human in a collaborative game. ",CAPIR: Collaborative Action Planning with Intention Recognition " We consider the problem of using a heuristic policy to improve the value approximation by the Upper Confidence Bound applied in Trees (UCT) algorithm in non-adversarial settings such as planning with large state-space Markov Decision Processes. Current improvements to UCT focus on either changing the action selection formula at the internal nodes or the rollout policy at the leaf nodes of the search tree. In this work, we propose to add an auxiliary arm to each of the internal nodes, and always use the heuristic policy to roll out simulations at the auxiliary arms. The method aims to get fast convergence to optimal values at states where the heuristic policy is optimal, while retaining similar approximation as the original UCT in other states. We show that bootstrapping with the proposed method in the new algorithm, UCT-Aux, performs better compared to the original UCT algorithm and its variants in two benchmark experiment settings. We also examine conditions under which UCT-Aux works well. ",Bootstrapping Monte Carlo Tree Search with an Imperfect Heuristic " We consider in this paper the formulation of approximate inference in Bayesian networks as a problem of exact inference on an approximate network that results from deleting edges (to reduce treewidth). We have shown in earlier work that deleting edges calls for introducing auxiliary network parameters to compensate for lost dependencies, and proposed intuitive conditions for determining these parameters. We have also shown that our method corresponds to IBP when enough edges are deleted to yield a polytree, and corresponds to some generalizations of IBP when fewer edges are deleted. In this paper, we propose a different criterion for determining auxiliary parameters based on optimizing the KL-divergence between the original and approximate networks. We discuss the relationship between the two methods for selecting parameters, shedding new light on IBP and its generalizations. We also discuss the application of our new method to approximating inference problems which are exponential in constrained treewidth, including MAP and nonmyopic value of information. 
","A Variational Approach for Approximating Bayesian Networks by Edge Deletion" " In Bayesian networks, a Most Probable Explanation (MPE) is a complete variable instantiation with a highest probability given the current evidence. In this paper, we discuss the problem of finding robustness conditions of the MPE under single parameter changes. Specifically, we ask the question: How much change in a single network parameter can we afford to apply while keeping the MPE unchanged? We will describe a procedure, which is the first of its kind, that computes this answer for each parameter in the Bayesian network variable in time O(n exp(w)), where n is the number of network variables and w is its treewidth. ",On the Robustness of Most Probable Explanations " The paper analyzes theoretically and empirically the performance of likelihood weighting (LW) on a subset of nodes in Bayesian networks. The proposed scheme requires fewer samples to converge due to reduction in sampling variance. The method exploits the structure of the network to bound the complexity of exact inference used to compute sampling distributions, similar to Gibbs cutset sampling. Yet, the extension of the previosly proposed cutset sampling principles to likelihood weighting is non-trivial due to differences in the sampling processes of Gibbs sampler and LW. We demonstrate empirically that likelihood weighting on a cutset (LWLC) is effective time-wise and has a lower rejection rate than LW when applied to networks with many deterministic probabilities. Finally, we show that the performance of likelihood weighting on a cutset can be improved further by caching computed sampling distributions and, consequently, learning 'zeros' of the target distribution. ",Cutset Sampling with Likelihood Weighting " Linear-time computational techniques have been developed for combining evidence which is available on a number of contending hypotheses. They offer a means of making the computation-intensive calculations involved more efficient in certain circumstances. Unfortunately, they restrict the orthogonal sum of evidential functions to the dichotomous structure applies only to elements and their complements. In this paper, we present a novel evidence structure in terms of a triplet and a set of algorithms for evidential reasoning. The merit of this structure is that it divides a set of evidence into three subsets, distinguishing trivial evidential elements from important ones focusing some particular elements. It avoids the deficits of the dichotomous structure in representing the preference of evidence and estimating the basic probability assignment of evidence. We have established a formalism for this structure and the general formulae for combining pieces of evidence in the form of the triplet, which have been theoretically justified. ",An Efficient Triplet-based Algorithm for Evidential Reasoning " Separable Bayesian Networks, or the Influence Model, are dynamic Bayesian Networks in which the conditional probability distribution can be separated into a function of only the marginal distribution of a node's neighbors, instead of the joint distributions. In terms of modeling, separable networks has rendered possible siginificant reduction in complexity, as the state space is only linear in the number of variables on the network, in contrast to a typical state space which is exponential. In this work, We describe the connection between an arbitrary Conditional Probability Table (CPT) and separable systems using linear algebra. 
We give an alternative proof of the equivalence of sufficiency and separability. We present a computational method for testing whether a given CPT is separable. ",Linear Algebra Approach to Separable Bayesian Networks " This paper is concerned with graphical criteria that can be used to solve the problem of identifying causal effects from nonexperimental data in a causal Bayesian network structure, i.e., a directed acyclic graph that represents causal relationships. We first review Pearl's work on this topic [Pearl, 1995], in which several useful graphical criteria are presented. Then we present a complete algorithm [Huang and Valtorta, 2006b] for the identifiability problem. By exploiting the completeness of this algorithm, we prove that the three basic do-calculus rules that Pearl presents are complete, in the sense that, if a causal effect is identifiable, there exists a sequence of applications of the rules of the do-calculus that transforms the causal effect formula into a formula that only includes observational quantities. ",Pearl's Calculus of Intervention Is Complete " This paper studies a new and more general axiomatization than the one presented previously for preference on likelihood gambles. Likelihood gambles describe actions in a situation where a decision maker knows multiple probabilistic models and a random sample generated from one of those models but does not know the prior probability of models. This new axiom system is inspired by Jensen's axiomatization of probabilistic gambles. Our approach provides a new perspective to the role of data in decision making under ambiguity. It avoids one of the most controversial issues of Bayesian methodology, namely the assumption of prior probability. ",A new axiomatization for likelihood gambles " Continuous-time Bayesian networks (CTBNs) are graphical representations of multi-component continuous-time Markov processes as directed graphs. The edges in the network represent direct influences among components. The joint rate matrix of the multi-component process is specified by means of conditional rate matrices for each component separately. This paper addresses the situation where some of the components evolve on a time scale that is much shorter compared to the time scale of the other components. In this paper, we prove that in the limit where the separation of scales is infinite, the Markov process converges (in distribution, or weakly) to a reduced, or effective Markov process that only involves the slow components. We also demonstrate that for reasonable separation of scale (an order of magnitude) the reduced process is a good approximation of the marginal process over the slow components. We provide a simple procedure for building a reduced CTBN for this effective process, with conditional rate matrices that can be directly calculated from the original CTBN, and discuss the implications for approximate reasoning in large systems. ","Dimension Reduction in Singularly Perturbed Continuous-Time Bayesian Networks" " A popular approach to solving large probabilistic systems relies on aggregating states based on a measure of similarity. Many approaches in the literature are heuristic. A number of recent methods rely instead on metrics based on the notion of bisimulation, or behavioral equivalence between states (Givan et al, 2001, 2003; Ferns et al, 2004). An integral component of such metrics is the Kantorovich metric between probability distributions. 
However, while this metric enables many satisfying theoretical properties, it is costly to compute in practice. In this paper, we use techniques from network optimization and statistical sampling to overcome this problem. We obtain in this manner a variety of distance functions for MDP state aggregation, which differ in the tradeoff between time and space complexity, as well as the quality of the aggregation. We provide an empirical evaluation of these trade-offs. ",Methods for computing state similarity in Markov Decision Processes " Inference for probabilistic graphical models is still very much a practical challenge in large domains. The commonly used and effective belief propagation (BP) algorithm and its generalizations often do not converge when applied to hard, real-life inference tasks. While it is widely recognized that the scheduling of messages in these algorithms may have significant consequences, this issue remains largely unexplored. In this work, we address the question of how to schedule messages for asynchronous propagation so that a fixed point is reached faster and more often. We first show that any reasonable asynchronous BP converges to a unique fixed point under conditions similar to those that guarantee convergence of synchronous BP. In addition, we show that the convergence rate of a simple round-robin schedule is at least as good as that of synchronous propagation. We then propose residual belief propagation (RBP), a novel, easy-to-implement, asynchronous propagation algorithm that schedules messages in an informed way, that pushes down a bound on the distance from the fixed point. Finally, we demonstrate the superiority of RBP over state-of-the-art methods for a variety of challenging synthetic and real-life problems: RBP converges significantly more often than other methods; and it significantly reduces running time until convergence, even when other methods converge. ","Residual Belief Propagation: Informed Scheduling for Asynchronous Message Passing" " Directed possibly cyclic graphs have been proposed by Didelez (2000) and Nodelmann et al. (2002) in order to represent the dynamic dependencies among stochastic processes. These dependencies are based on a generalization of Granger-causality to continuous time, first developed by Schweder (1970) for Markov processes, who called them local dependencies. They deserve special attention as they are asymmetric unlike stochastic (in)dependence. In this paper we focus on their graphical representation and develop a suitable, i.e. asymmetric notion of separation, called delta-separation. The properties of this graph separation as well as of local independence are investigated in detail within a framework of asymmetric (semi)graphoids allowing a deeper insight into what information can be read off these graphs. ",Asymmetric separation for local independence graphs " There exist several architectures to solve influence diagrams using local computations, such as the Shenoy-Shafer, the HUGIN, or the Lazy Propagation architectures. They all extend usual variable elimination algorithms thanks to the use of so-called 'potentials'. In this paper, we introduce a new architecture, called the Multi-operator Cluster DAG architecture, which can produce decompositions with an improved constrained induced-width, and therefore induce potentially exponential gains. Its principle is to benefit from the composite nature of influence diagrams, instead of using uniform potentials, in order to better analyze the problem structure. 
",From influence diagrams to multi-operator cluster DAGs " Tasks such as record linkage and multi-target tracking, which involve reconstructing the set of objects that underlie some observed data, are particularly challenging for probabilistic inference. Recent work has achieved efficient and accurate inference on such problems using Markov chain Monte Carlo (MCMC) techniques with customized proposal distributions. Currently, implementing such a system requires coding MCMC state representations and acceptance probability calculations that are specific to a particular application. An alternative approach, which we pursue in this paper, is to use a general-purpose probabilistic modeling language (such as BLOG) and a generic Metropolis-Hastings MCMC algorithm that supports user-supplied proposal distributions. Our algorithm gains flexibility by using MCMC states that are only partial descriptions of possible worlds; we provide conditions under which MCMC over partial worlds yields correct answers to queries. We also show how to use a context-specific Bayes net to identify the factors in the acceptance probability that need to be computed for a given proposed move. Experimental results on a citation matching task show that our general-purpose MCMC engine compares favorably with an application-specific system. ",General-Purpose MCMC Inference over Relational Structures " In recent years Bayesian networks (BNs) with a mixture of continuous and discrete variables have received an increasing level of attention. We present an architecture for exact belief update in Conditional Linear Gaussian BNs (CLG BNs). The architecture is an extension of lazy propagation using operations of Lauritzen & Jensen [6] and Cowell [2]. By decomposing clique and separator potentials into sets of factors, the proposed architecture takes advantage of independence and irrelevance properties induced by the structure of the graph and the evidence. The resulting benefits are illustrated by examples. Results of a preliminary empirical performance evaluation indicate a significant potential of the proposed architecture. ",Belief Update in CLG Bayesian Networks With Lazy Propagation " We set up a model for reasoning about metric spaces with belief theoretic measures. The uncertainty in these spaces stems from both probability and metric. To represent both aspect of uncertainty, we choose an expected distance function as a measure of uncertainty. A formal logical system is constructed for the reasoning about expected distance. Soundness and completeness are shown for this logic. For reasoning on product metric space with uncertainty, a new metric is defined and shown to have good properties. ",Reasoning about Uncertainty in Metric Spaces " The National Airspace System (NAS) is a large and complex system with thousands of interrelated components: administration, control centers, airports, airlines, aircraft, passengers, etc. The complexity of the NAS creates many difficulties in management and control. One of the most pressing problems is flight delay. Delay creates high cost to airlines, complaints from passengers, and difficulties for airport operations. As demand on the system increases, the delay problem becomes more and more prominent. For this reason, it is essential for the Federal Aviation Administration to understand the causes of delay and to find ways to reduce delay. 
Major contributing factors to delay are congestion at the origin airport, weather, increasing demand, and air traffic management (ATM) decisions such as the Ground Delay Programs (GDP). Delay is an inherently stochastic phenomenon. Even if all known causal factors could be accounted for, macro-level national airspace system (NAS) delays could not be predicted with certainty from micro-level aircraft information. This paper presents a stochastic model that uses Bayesian Networks (BNs) to model the relationships among different components of aircraft delay and the causal factors that affect delays. A case study on delays of departure flights from Chicago O'Hare International Airport (ORD) to Hartsfield-Jackson Atlanta International Airport (ATL) reveals how local and system-level environmental and human-caused factors combine to affect components of delay, and how these components contribute to the final arrival delay at the destination airport. ",Propagation of Delays in the National Airspace System " We introduce a new dynamic model with the capability of recognizing both the activities that an individual is performing and where that individual is located. Our model is novel in that it utilizes a dynamic graphical model to jointly estimate both activity and spatial context over time based on the simultaneous use of asynchronous observations consisting of GPS measurements and measurements from a small mountable sensor board. Joint inference is quite desirable as it has the ability to improve accuracy of the model. A key goal, however, in designing our overall system is to be able to perform accurate inference decisions while minimizing the amount of hardware an individual must wear. This minimization leads to greater comfort and flexibility, decreased power requirements and therefore increased battery life, and reduced cost. We show results indicating that our joint measurement model outperforms measurements from either the sensor board or GPS alone, using two types of probabilistic inference procedures, namely particle filtering and pruned exact inference. ",Recognizing Activities and Spatial Context Using Wearable Sensors " We study the problem of learning the best Bayesian network structure with respect to a decomposable score such as BDe, BIC or AIC. This problem is known to be NP-hard, which means that solving it becomes quickly infeasible as the number of variables increases. Nevertheless, in this paper we show that it is possible to learn the best Bayesian network structure with over 30 variables, which covers many practically interesting cases. Our algorithm is less complicated and more efficient than the techniques presented earlier. It can be easily parallelized, and offers a possibility for efficient exploration of the best networks consistent with different variable orderings. In the experimental part of the paper we compare the performance of the algorithm to the previous state-of-the-art algorithm. Free source code and an online demo can be found at http://b-course.hiit.fi/bene. ","A simple approach for finding the globally optimal Bayesian network structure" " Recent work on approximate linear programming (ALP) techniques for first-order Markov Decision Processes (FOMDPs) represents the value function linearly w.r.t. a set of first-order basis functions and uses linear programming techniques to determine suitable weights. 
This approach offers the advantage that it does not require simplification of the first-order value function, and allows one to solve FOMDPs independent of a specific domain instantiation. In this paper, we address several questions to enhance the applicability of this work: (1) Can we extend the first-order ALP framework to approximate policy iteration to address performance deficiencies of previous approaches? (2) Can we automatically generate basis functions and evaluate their impact on value function quality? (3) How can we decompose intractable problems with universally quantified rewards into tractable subproblems? We propose answers to these questions along with a number of novel optimizations and provide a comparative empirical evaluation on logistics problems from the ICAPS 2004 Probabilistic Planning Competition. ",Practical Linear Value-approximation Techniques for First-order MDPs " In this paper we promote introducing software verification and control flow graph similarity measurement in automated evaluation of students' programs. We present a new grading framework that merges results obtained by a combination of these two approaches with results obtained by automated testing, leading to improved quality and precision of automated grading. These two approaches are also useful in providing comprehensible feedback that can help students to improve the quality of their programs. We also present our corresponding tools that are publicly available and open source. The tools are based on the LLVM low-level intermediate code representation, so they could be applied to a number of programming languages. Experimental evaluation of the proposed grading framework is performed on a corpus of university students' programs written in the programming language C. Results of the experiments show that automatically generated grades are highly correlated with manually determined grades, suggesting that the presented tools can find real-world applications in studying and grading. ","Software Verification and Graph Similarity for Automated Evaluation of Students' Assignments" " The use of Artificial Intelligence is finding prominence not only in core computer areas, but also in cross-disciplinary areas including medical diagnosis. In this paper, we present a rule-based Expert System used in the diagnosis of Cerebral Palsy. The expert system takes user input and, depending on the symptoms of the patient, diagnoses if the patient is suffering from Cerebral Palsy. The Expert System also classifies the Cerebral Palsy as mild, moderate or severe based on the presented symptoms. ",Rule Based Expert System for Cerebral Palsy Diagnosis " This paper focuses on the restart strategy of CMA-ES on multi-modal functions. A first alternative strategy proceeds by decreasing the initial step-size of the mutation while doubling the population size at each restart. A second strategy adaptively allocates the computational budget among the restart settings in the BIPOP scheme. Both restart strategies are validated on the BBOB benchmark; their generality is also demonstrated on an independent real-world problem suite related to spacecraft trajectory optimization. ",Alternative Restart Strategies for CMA-ES " Covering is an important type of data structure, while covering-based rough sets provide an efficient and systematic theory to deal with covering data. In this paper, we use boolean matrices to represent and axiomatize three types of covering approximation operators. 
First, we define two types of characteristic matrices of a covering, which are essentially square boolean ones, and their properties are studied. Through the characteristic matrices, three important types of covering approximation operators are concisely and equivalently represented. Second, matrix representations of covering approximation operators are used in boolean matrix decomposition. We provide a sufficient and necessary condition for a square boolean matrix to decompose into the boolean product of another one and its transpose. We also develop an algorithm for this boolean matrix decomposition. Finally, based on the above results, these three types of covering approximation operators are axiomatized using boolean matrices. In a word, this work borrows extensively from boolean matrices and presents a new view for studying covering-based rough sets. ","Characteristic matrix of covering and its application to boolean matrix decomposition and axiomatization" " Principal Component Analysis (PCA) finds a linear mapping and maximizes the variance of the data, which makes PCA sensitive to outliers and may cause incorrect eigendirections. In this paper, we propose techniques to solve this problem; we use the data-centering method and re-estimate the covariance matrix using robust statistical techniques such as the median, robust scaling (which is a booster to data-centering) and the Huber M-estimator, which measures the presence of outliers and reweights them with small values. The results on several real-world data sets show that our proposed method handles outliers and gains better results than the original PCA and provides the same accuracy with lower computation cost than the Kernel PCA using the polynomial kernel in classification tasks. ",Robust Principal Component Analysis Using Statistical Estimators " A new generalized multilinear regression model, termed the Higher-Order Partial Least Squares (HOPLS), is introduced with the aim of predicting a tensor (multiway array) $\tensor{Y}$ from a tensor $\tensor{X}$ through projecting the data onto the latent space and performing regression on the corresponding latent variables. HOPLS differs substantially from other regression models in that it explains the data by a sum of orthogonal Tucker tensors, while the number of orthogonal loadings serves as a parameter to control model complexity and prevent overfitting. The low-dimensional latent space is optimized sequentially via a deflation operation, yielding the best joint subspace approximation for both $\tensor{X}$ and $\tensor{Y}$. Instead of decomposing $\tensor{X}$ and $\tensor{Y}$ individually, higher-order singular value decomposition on a newly defined generalized cross-covariance tensor is employed to optimize the orthogonal loadings. A systematic comparison on both synthetic data and real-world decoding of 3D movement trajectories from electrocorticogram (ECoG) signals demonstrates the advantages of HOPLS over the existing methods in terms of better predictive ability, suitability to handle small sample sizes, and robustness to noise. ","Higher-Order Partial Least Squares (HOPLS): A Generalized Multi-Linear Regression Method" " While POMDPs provide a general platform for non-deterministic conditional planning under a variety of quality metrics, they have limited scalability. On the other hand, non-deterministic conditional planners scale very well, but many lack the ability to optimize plan quality metrics. 
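The robust PCA abstract above combines median centring with Huber-style reweighting before the eigendecomposition; the sketch below illustrates that general recipe in Python. It is a simplified stand-in under stated assumptions: the function robust_pca, the threshold c and the Mahalanobis-type distance are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def robust_pca(X, n_components=2, c=1.345, n_iter=10):
    """X: (n_samples, n_features). Returns (principal directions, sample weights)."""
    Z = X - np.median(X, axis=0)                  # median centring instead of the mean
    w = np.ones(len(Z))
    for _ in range(n_iter):
        cov = (Z * w[:, None]).T @ Z / w.sum()    # weighted covariance estimate
        # Mahalanobis-type distance of each sample under the current estimate
        q = np.einsum('ij,jk,ik->i', Z, np.linalg.pinv(cov), Z)
        d = np.sqrt(np.maximum(q, 0.0))
        r = d / (np.median(d) + 1e-12)            # robustly scaled residuals
        w = np.where(r <= c, 1.0, c / r)          # Huber weights: 1 inside, c/r outside
    cov = (Z * w[:, None]).T @ Z / w.sum()        # final covariance with the last weights
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order], w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:5] += 50.0                                     # five grossly corrupted rows
components, weights = robust_pca(X)
print(weights[:5])                                # the corrupted rows receive small weights
```

The point of the reweighting loop is that gross outliers stop dominating the covariance estimate, so the leading eigenvectors stay close to the directions of the clean data.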
We present a novel generalization of planning graph based heuristics that helps conditional planners both scale and generate high quality plans when using actions with nonuniform costs. We make empirical comparisons with two state of the art planners to show the benefit of our techniques. ",Cost Sensitive Reachability Heuristics for Handling State Uncertainty " With the aid of the concept of stable independence we can construct, in an efficient way, a compact representation of a semi-graphoid independence relation. We show that this representation provides a new necessary condition for the existence of a directed perfect map for the relation. The test for this condition is based to a large extent on the transitivity property of a special form of d-separation. The complexity of the test is linear in the size of the representation. The test, moreover, brings the additional benefit that it can be used to guide the early stages of network construction. ",Stable Independence in Perfect Maps " Many real world sequences such as protein secondary structures or shell logs exhibit a rich internal structures. Traditional probabilistic models of sequences, however, consider sequences of flat symbols only. Logical hidden Markov models have been proposed as one solution. They deal with logical sequences, i.e., sequences over an alphabet of logical atoms. This comes at the expense of a more complex model selection problem. Indeed, different abstraction levels have to be explored. In this paper, we propose a novel method for selecting logical hidden Markov models from data called SAGEM. SAGEM combines generalized expectation maximization, which optimizes parameters, with structure search for model selection using inductive logic programming refinement operators. We provide convergence and experimental results that show SAGEM's effectiveness. ",'Say EM' for Selecting Probabilistic Models for Logical Sequences " Intelligent systems in an open world must reason about many interacting entities related to each other in diverse ways and having uncertain features and relationships. Traditional probabilistic languages lack the expressive power to handle relational domains. Classical first-order logic is sufficiently expressive, but lacks a coherent plausible reasoning capability. Recent years have seen the emergence of a variety of approaches to integrating first-order logic, probability, and machine learning. This paper presents Multi-entity Bayesian networks (MEBN), a formal system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures, and can express a probability distribution over models of any consistent, finitely axiomatizable first-order theory. We present the logic using an example inspired by the Paramount Series StarTrek. ",Of Starships and Klingons: Bayesian Logic for the 23rd Century " In this paper we present a differential semantics of Lazy AR Propagation (LARP) in discrete Bayesian networks. We describe how both single and multi dimensional partial derivatives of the evidence may easily be calculated from a junction tree in LARP equilibrium. We show that the simplicity of the calculations stems from the nature of LARP. Based on the differential semantics we describe how variable propagation in the LARP architecture may give access to additional partial derivatives. 
The cautious LARP (cLARP) scheme is derived to produce a flexible cLARP equilibrium that offers additional opportunities for calculating single and multidimensional partial derivatives of the evidence and subsets of the evidence from a single propagation. The results of an empirical evaluation illustrate how access to a largely increased number of partial derivatives comes at a low computational cost. ",A Differential Semantics of Lazy AR Propagation " This paper deals with the following problem: modify a Bayesian network to satisfy a given set of probability constraints by only changing its conditional probability tables, and the probability distribution of the resulting network should be as close as possible to that of the original network. We propose to solve this problem by extending IPFP (iterative proportional fitting procedure) to probability distributions represented by Bayesian networks. The resulting algorithm E-IPFP is further developed to D-IPFP, which reduces the computational cost by decomposing a global E-IPFP into a set of smaller local E-IPFP problems. Limited analysis is provided, including the convergence proofs of the two algorithms. Computer experiments were conducted to validate the algorithms. The results are consistent with the theoretical analysis. ",Modifying Bayesian Networks by Probability Constraints " Studying the effects of one-way variation of any number of parameters on any number of output probabilities quickly becomes infeasible in practice, especially if various evidence profiles are to be taken into consideration. To provide for identifying the parameters that have a potentially large effect prior to actually performing the analysis, we need properties of sensitivity functions that are independent of the network under study, of the available evidence, or of both. In this paper, we study properties that depend upon just the probability of the entered evidence. We demonstrate that these properties provide for establishing an upper bound on the sensitivity value for a parameter; they further provide for establishing the region in which the vertex of the sensitivity function resides, thereby serving to identify parameters with a low sensitivity value that may still have a large impact on the probability of interest for relatively small parameter variations. ",Exploiting Evidence-dependent Sensitivity Bounds " We present multi-agent A* (MAA*), the first complete and optimal heuristic search algorithm for solving decentralized partially-observable Markov decision problems (DEC-POMDPs) with finite horizon. The algorithm is suitable for computing optimal plans for a cooperative group of agents that operate in a stochastic environment such as multirobot coordination, network traffic control, or distributed resource allocation. Solving such problems effectively is a major challenge in the area of planning under uncertainty. Our solution is based on a synthesis of classical heuristic search and decentralized control theory. Experimental results show that MAA* has significant advantages. We introduce an anytime variant of MAA* and conclude with a discussion of promising extensions such as an approach to solving infinite horizon problems. ",MAA*: A Heuristic Search Algorithm for Solving Decentralized POMDPs " Inferring from inconsistency and making decisions are two problems which have always been treated separately by researchers in Artificial Intelligence. Consequently, different models have been proposed for each category. 
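For readers unfamiliar with the iterative proportional fitting procedure that E-IPFP extends, the sketch below shows plain IPFP on an explicit joint table: each constraint is enforced in turn by rescaling the joint so its marginal matches the target. It operates on a full table rather than a Bayesian-network factorisation, so it is only a conceptual illustration of the fitting step, not the paper's E-IPFP or D-IPFP.

```python
import numpy as np

def ipfp(joint, constraints, n_sweeps=50):
    """joint: array over all variables (sums to 1).
    constraints: list of (axis, target) with target the desired marginal of that axis."""
    P = joint.copy()
    for _ in range(n_sweeps):
        for axis, target in constraints:
            other = tuple(a for a in range(P.ndim) if a != axis)
            marginal = P.sum(axis=other)                        # current marginal of this variable
            ratio = np.where(marginal > 0, target / marginal, 0.0)
            shape = [1] * P.ndim
            shape[axis] = -1
            P = P * ratio.reshape(shape)                        # proportional fitting step
    return P

# Usage: a 2x2 joint nudged to have uniform marginals on both variables.
joint = np.array([[0.4, 0.3], [0.2, 0.1]])
fitted = ipfp(joint, [(0, np.array([0.5, 0.5])), (1, np.array([0.5, 0.5]))])
print(fitted.sum(axis=1), fitted.sum(axis=0))                   # both close to [0.5, 0.5]
```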
Different argumentation systems [2, 7, 10, 11] have been developed for handling inconsistency in knowledge bases. Recently, other argumentation systems [3, 4, 8] have been defined for making decisions under uncertainty. The aim of this paper is to present a general argumentation framework in which both inferring from inconsistency and decision making are captured. The proposed framework can be used for decision under uncertainty, multiple criteria decision, rule-based decision and finally case-based decision. Moreover, works on classical decision suppose that the information about environment is coherent, and this no longer required by this general framework. ","A unified setting for inference and decision: An argumentation-based approach" " When a hybrid Bayesian network has conditionally deterministic variables with continuous parents, the joint density function for the continuous variables does not exist. Conditional linear Gaussian distributions can handle such cases when the continuous variables have a multi-variate normal distribution and the discrete variables do not have continuous parents. In this paper, operations required for performing inference with conditionally deterministic variables in hybrid Bayesian networks are developed. These methods allow inference in networks with deterministic variables where continuous variables may be non-Gaussian, and their density functions can be approximated by mixtures of truncated exponentials. There are no constraints on the placement of continuous and discrete nodes in the network. ",Hybrid Bayesian Networks with Linear Deterministic Variables " We consider the problem of deleting edges from a Bayesian network for the purpose of simplifying models in probabilistic inference. In particular, we propose a new method for deleting network edges, which is based on the evidence at hand. We provide some interesting bounds on the KL-divergence between original and approximate networks, which highlight the impact of given evidence on the quality of approximation and shed some light on good and bad candidates for edge deletion. We finally demonstrate empirically the promise of the proposed edge deletion technique as a basis for approximate inference. ",On Bayesian Network Approximation by Edge Deletion " We define the notion of compiling a Bayesian network with evidence and provide a specific approach for evidence-based compilation, which makes use of logical processing. The approach is practical and advantageous in a number of application areas-including maximum likelihood estimation, sensitivity analysis, and MAP computations-and we provide specific empirical results in the domain of genetic linkage analysis. We also show that the approach is applicable for networks that do not contain determinism, and show that it empirically subsumes the performance of the quickscore algorithm when applied to noisy-or networks. ",Exploiting Evidence in Probabilistic Inference " A model of the world built from sensor data may be incorrect even if the sensors are functioning correctly. Possible causes include the use of inappropriate sensors (e.g. a laser looking through glass walls), sensor inaccuracies accumulate (e.g. localization errors), the a priori models are wrong, or the internal representation does not match the world (e.g. a static occupancy grid used with dynamically moving objects). 
We are interested in the case where the constructed model of the world is flawed, but there is no access to the ground truth that would allow the system to see the discrepancy, such as a robot entering an unknown environment. This paper considers the problem of determining when something is wrong using only the sensor data used to construct the world model. It proposes 11 interpretation inconsistency indicators based on the Dempster-Shafer conflict metric, Con, and evaluates these indicators according to three criteria: ability to distinguish true inconsistency from sensor noise (classification), estimate the magnitude of discrepancies (estimation), and determine the source(s) (if any) of sensing problems in the environment (isolation). The evaluation is conducted using data from a mobile robot with sonar and laser range sensors navigating indoor environments under controlled conditions. The evaluation shows that the Gambino indicator performed best in terms of estimation (at best 0.77 correlation), isolation, and classification of the sensing situation as degraded (7% false negative rate) or normal (0% false positive rate). ","Use of Dempster-Shafer Conflict Metric to Detect Interpretation Inconsistency" " The Bayesian Logic (BLOG) language was recently developed for defining first-order probability models over worlds with unknown numbers of objects. It handles important problems in AI, including data association and population estimation. This paper extends BLOG by adopting generative processes over function spaces - known as nonparametrics in the Bayesian literature. We introduce syntax for reasoning about arbitrary collections of objects, and their properties, in an intuitive manner. By exploiting exchangeability, distributions over unknown objects and their attributes are cast as Dirichlet processes, which resolve difficulties in model selection and inference caused by varying numbers of objects. We demonstrate these concepts with application to citation matching. ",Nonparametric Bayesian Logic " We propose an efficient algorithm for estimation of possibility based qualitative expected utility. It is useful for decision making mechanisms where each possible decision is assigned a multi-attribute possibility distribution. The computational complexity of ordinary methods calculating the expected utility based on discretization is growing exponentially with the number of attributes, and may become infeasible with a high number of these attributes. We present series of theorems and lemmas proving the correctness of our algorithm that exibits a linear computational complexity. Our algorithm has been applied in the context of selecting the most prospective partners in multi-party multi-attribute negotiation, and can also be used in making decisions about potential offers during the negotiation as other similar problems. ","Efficient algorithm for estimation of qualitative expected utility in possibilistic case-based reasoning" " The local Markov condition for a DAG to be an independence map of a probability distribution is well known. For DAGs with latent variables, represented as bi-directed edges in the graph, the local Markov property may invoke exponential number of conditional independencies. This paper shows that the number of conditional independence relations required may be reduced if the probability distributions satisfy the composition axiom. In certain types of graphs, only linear number of conditional independencies are required. 
The result has applications in testing linear structural equation models with correlated errors. ",Local Markov Property for Models Satisfying Composition Axiom " We present a framework to discover and characterize different classes of everyday activities from event-streams. We begin by representing activities as bags of event n-grams. This allows us to analyze the global structural information of activities, using their local event statistics. We demonstrate how maximal cliques in an undirected edge-weighted graph of activities, can be used for activity-class discovery in an unsupervised manner. We show how modeling an activity as a variable length Markov process, can be used to discover recurrent event-motifs to characterize the discovered activity-classes. We present results over extensive data-sets, collected from multiple active environments, to show the competence and generalizability of our proposed framework. ",Unsupervised Activity Discovery and Characterization From Event-Streams " This paper describes a general framework called Hybrid Dynamic Mixed Networks (HDMNs) which are Hybrid Dynamic Bayesian Networks that allow representation of discrete deterministic information in the form of constraints. We propose approximate inference algorithms that integrate and adjust well known algorithmic principles such as Generalized Belief Propagation, Rao-Blackwellised Particle Filtering and Constraint Propagation to address the complexity of modeling and reasoning in HDMNs. We use this framework to model a person's travel activity over time and to predict destination and routes given the current location. We present a preliminary empirical evaluation demonstrating the effectiveness of our modeling framework and algorithms using several variants of the activity model. ",Modeling Transportation Routines using Hybrid Dynamic Mixed Networks " In this paper, we consider Hybrid Mixed Networks (HMN) which are Hybrid Bayesian Networks that allow discrete deterministic information to be modeled explicitly in the form of constraints. We present two approximate inference algorithms for HMNs that integrate and adjust well known algorithmic principles such as Generalized Belief Propagation, Rao-Blackwellised Importance Sampling and Constraint Propagation to address the complexity of modeling and reasoning in HMNs. We demonstrate the performance of our approximate inference algorithms on randomly generated HMNs. ","Approximate Inference Algorithms for Hybrid Bayesian Networks with Discrete Constraints" " We present metrics for measuring state similarity in Markov decision processes (MDPs) with infinitely many states, including MDPs with continuous state spaces. Such metrics provide a stable quantitative analogue of the notion of bisimulation for MDPs, and are suitable for use in MDP approximation. We show that the optimal value function associated with a discounted infinite horizon planning task varies continuously with respect to our metric distances. ",Metrics for Markov Decision Processes with Infinite State Spaces " Decision-theoretic planning with risk-sensitive planning objectives is important for building autonomous agents or decision-support systems for real-world applications. However, this line of research has been largely ignored in the artificial intelligence and operations research communities since planning with risk-sensitive planning objectives is more complicated than planning with risk-neutral planning objectives. 
To remedy this situation, we derive conditions that guarantee that the optimal expected utilities of the total plan-execution reward exist and are finite for fully observable Markov decision process models with non-linear utility functions. In the case of Markov decision process models with both positive and negative rewards, most of our results hold for stationary policies only, but we conjecture that they can be generalized to non-stationary policies. ","Existence and Finiteness Conditions for Risk-Sensitive Planning: Results and Conjectures" " A fundamental issue in real-world systems, such as sensor networks, is the selection of observations which most effectively reduce uncertainty. More specifically, we address the long-standing problem of nonmyopically selecting the most informative subset of variables in a graphical model. We present the first efficient randomized algorithm providing a constant factor (1-1/e-epsilon) approximation guarantee for any epsilon > 0 with high confidence. The algorithm leverages the theory of submodular functions, in combination with a polynomial bound on sample complexity. We furthermore prove that no polynomial time algorithm can provide a constant factor approximation better than (1 - 1/e) unless P = NP. Finally, we provide extensive evidence of the effectiveness of our method on two complex real-world datasets. ",Near-optimal Nonmyopic Value of Information in Graphical Models " In this paper, we propose a revision-based approach for conflict resolution by generalizing the Disjunctive Maxi-Adjustment (DMA) approach (Benferhat et al. 2004). Revision operators can be classified into two different families: the model-based ones and the formula-based ones. So the revision-based approach has two different versions according to which family of revision operators is chosen. Two particular revision operators are considered: one is Dalal's revision operator, which is a model-based revision operator, and the other is the cardinality-maximal based revision operator, which is a formula-based revision operator. When Dalal's revision operator is chosen, the revision-based approach is independent of the syntactic form in each stratum and it captures some notion of minimal change. When the cardinality-maximal based revision operator is chosen, the revision-based approach is equivalent to the DMA approach. We also show that both approaches are computationally easier than the DMA approach. ",A Revision-Based Approach to Resolving Conflicting Information " Systems such as sensor networks and teams of autonomous robots consist of multiple autonomous entities that interact with each other in a distributed, asynchronous manner. These entities need to keep track of the state of the system as it evolves. Asynchronous systems lead to special challenges for monitoring, as nodes must update their beliefs independently of each other and no central coordination is possible. Furthermore, the state of the system continues to change as beliefs are being updated. Previous approaches to developing distributed asynchronous probabilistic reasoning systems have used static models. We present an approach using dynamic models, which take into account the way the system changes state over time. Our approach, which is based on belief propagation, is fully distributed and asynchronous, and allows the world to keep on changing as messages are being sent around. Experimental results show that our approach compares favorably to the factored frontier algorithm. 
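The (1 - 1/e) style guarantee mentioned in the value-of-information abstract above comes from greedy maximisation of a monotone submodular objective. The sketch below shows that greedy loop on a toy coverage objective; the gain function, the sensor names and the coverage sets are invented for illustration and stand in for the mutual-information criterion used in the paper.

```python
def greedy_select(candidates, gain, k):
    """candidates: iterable of elements; gain(S, e): marginal gain of adding e to set S."""
    selected = []
    for _ in range(k):
        best = max((e for e in candidates if e not in selected),
                   key=lambda e: gain(selected, e))   # pick the largest marginal gain
        selected.append(best)
    return selected

# Usage: pick 2 sensors covering the most area, a classic monotone submodular objective.
coverage = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6, 7}}
def gain(S, e):
    covered = set().union(*(coverage[s] for s in S)) if S else set()
    return len(coverage[e] - covered)

print(greedy_select(list(coverage), gain, 2))   # -> ['s3', 's1']
```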
",Asynchronous Dynamic Bayesian Networks " Continuous time Bayesian networks (CTBNs) describe structured stochastic processes with finitely many states that evolve over continuous time. A CTBN is a directed (possibly cyclic) dependency graph over a set of variables, each of which represents a finite state continuous time Markov process whose transition model is a function of its parents. As shown previously, exact inference in CTBNs is intractable. We address the problem of approximate inference, allowing for general queries conditioned on evidence over continuous time intervals and at discrete time points. We show how CTBNs can be parameterized within the exponential family, and use that insight to develop a message passing scheme in cluster graphs and allows us to apply expectation propagation to CTBNs. The clusters in our cluster graph do not contain distributions over the cluster variables at individual time points, but distributions over trajectories of the variables throughout a duration. Thus, unlike discrete time temporal models such as dynamic Bayesian networks, we can adapt the time granularity at which we reason for different variables and in different conditions. ",Expectation Propagation for Continuous Time Bayesian Networks " Continuous time Bayesian networks (CTBNs) describe structured stochastic processes with finitely many states that evolve over continuous time. A CTBN is a directed (possibly cyclic) dependency graph over a set of variables, each of which represents a finite state continuous time Markov process whose transition model is a function of its parents. We address the problem of learning the parameters and structure of a CTBN from partially observed data. We show how to apply expectation maximization (EM) and structural expectation maximization (SEM) to CTBNs. The availability of the EM algorithm allows us to extend the representation of CTBNs to allow a much richer class of transition durations distributions, known as phase distributions. This class is a highly expressive semi-parametric representation, which can approximate any duration distribution arbitrarily closely. This extension to the CTBN framework addresses one of the main limitations of both CTBNs and DBNs - the restriction to exponentially / geometrically distributed duration. We present experimental results on a real data set of people's life spans, showing that our algorithm learns reasonable models - structure and parameters - from partially observed data, and, with the use of phase distributions, achieves better performance than DBNs. ","Expectation Maximization and Complex Duration Distributions for Continuous Time Bayesian Networks" " We derive novel sufficient conditions for convergence of Loopy Belief Propagation (also known as the Sum-Product algorithm) to a unique fixed point. Our results improve upon previously known conditions. For binary variables with (anti-)ferromagnetic interactions, our conditions seem to be sharp. ",Sufficient conditions for convergence of Loopy Belief Propagation " In this paper we compare search and inference in graphical models through the new framework of AND/OR search. Specifically, we compare Variable Elimination (VE) and memoryintensive AND/OR Search (AO) and place algorithms such as graph-based backjumping and no-good and good learning, as well as Recursive Conditioning [7] and Value Elimination [2] within the AND/OR search framework. 
",The Relationship Between AND/OR Search and Variable Elimination " This paper addresses a fundamental issue central to approximation methods for solving large Markov decision processes (MDPs): how to automatically learn the underlying representation for value function approximation? A novel theoretically rigorous framework is proposed that automatically generates geometrically customized orthonormal sets of basis functions, which can be used with any approximate MDP solver like least squares policy iteration (LSPI). The key innovation is a coordinate-free representation of value functions, using the theory of smooth functions on a Riemannian manifold. Hodge theory yields a constructive method for generating basis functions for approximating value functions based on the eigenfunctions of the self-adjoint (Laplace-Beltrami) operator on manifolds. In effect, this approach performs a global Fourier analysis on the state space graph to approximate value functions, where the basis functions reflect the largescale topology of the underlying state space. A new class of algorithms called Representation Policy Iteration (RPI) are presented that automatically learn both basis functions and approximately optimal policies. Illustrative experiments compare the performance of RPI with that of LSPI using two handcoded basis functions (RBF and polynomial state encodings). ",Representation Policy Iteration " Existing complexity bounds for point-based POMDP value iteration algorithms focus either on the curse of dimensionality or the curse of history. We derive a new bound that relies on both and uses the concept of discounted reachability; our conclusions may help guide future algorithm design. We also discuss recent improvements to our (point-based) heuristic search value iteration algorithm. Our new implementation calculates tighter initial bounds, avoids solving linear programs, and makes more effective use of sparsity. ",Point-Based POMDP Algorithms: Improved Analysis and Implementation " We introduce a new approximate solution technique for first-order Markov decision processes (FOMDPs). Representing the value function linearly w.r.t. a set of first-order basis functions, we compute suitable weights by casting the corresponding optimization as a first-order linear program and show how off-the-shelf theorem prover and LP software can be effectively used. This technique allows one to solve FOMDPs independent of a specific domain instantiation; furthermore, it allows one to determine bounds on approximation error that apply equally to all domain instantiations. We apply this solution technique to the task of elevator scheduling with a rich feature space and multi-criteria additive reward, and demonstrate that it outperforms a number of intuitive, heuristicallyguided policies. ",Approximate Linear Programming for First-order MDPs " Models of dynamical systems based on predictive state representations (PSRs) are defined strictly in terms of observable quantities, in contrast with traditional models (such as Hidden Markov Models) that use latent variables or statespace representations. In addition, PSRs have an effectively infinite memory, allowing them to model some systems that finite memory-based models cannot. Thus far, PSR models have primarily been developed for domains with discrete observations. Here, we develop the Predictive Linear-Gaussian (PLG) model, a class of PSR models for domains with continuous observations. 
We show that PLG models subsume Linear Dynamical System models (also called Kalman filter models or state-space models) while using fewer parameters. We also introduce an algorithm to estimate PLG parameters from data, and contrast it with standard Expectation Maximization (EM) algorithms used to estimate Kalman filter parameters. We show that our algorithm is a consistent estimation procedure and present preliminary empirical results suggesting that our algorithm outperforms EM, particularly as the model dimension increases. ",Predictive Linear-Gaussian Models of Stochastic Dynamical Systems " We consider the problem of diagnosing faults in a system represented by a Bayesian network, where diagnosis corresponds to recovering the most likely state of unobserved nodes given the outcomes of tests (observed nodes). Finding an optimal subset of tests in this setting is intractable in general. We show that it is difficult even to compute the next most-informative test using greedy test selection, as it involves several entropy terms whose exact computation is intractable. We propose an approximate approach that utilizes the loopy belief propagation infrastructure to simultaneously compute approximations of marginal and conditional entropies on multiple subsets of nodes. We apply our method to fault diagnosis in computer networks, and show the algorithm to be very effective on realistic Internet-like topologies. We also provide theoretical justification for the greedy test selection approach, along with some performance guarantees. ",Efficient Test Selection in Active Diagnosis via Entropy Approximation " One of the main problems of importance sampling in Bayesian networks is representation of the importance function, which should ideally be as close as possible to the posterior joint distribution. Typically, we represent an importance function as a factorization, i.e., product of conditional probability tables (CPTs). Given diagnostic evidence, we do not have explicit forms for the CPTs in the networks. We first derive the exact form for the CPTs of the optimal importance function. Since the calculation is hard, we usually only use their approximations. We review several popular strategies and point out their limitations. Based on an analysis of the influence of evidence, we propose a method for approximating the exact form of importance function by explicitly modeling the most important additional dependence relations introduced by evidence. Our experimental results show that the new approximation strategy offers an immediate improvement in the quality of the importance function. ","Importance Sampling in Bayesian Networks: An Influence-Based Approximation Strategy for Importance Functions" " GBP and EP are two successful algorithms for approximate probabilistic inference, which are based on different approximation strategies. An open problem in both algorithms has been how to choose an appropriate approximation structure. We introduce 'structured region graphs', a formalism which marries these two strategies, reveals a deep connection between them, and suggests how to choose good approximation structures. In this formalism, each region has an internal structure which defines an exponential family, whose sufficient statistics must be matched by the parent region. Reduction operators on these structures allow conversion between EP and GBP free energies. 
Thus it is revealed that all EP approximations on discrete variables are special cases of GBP, and conversely that some well-known GBP approximations, such as overlapping squares, are special cases of EP. Furthermore, region graphs derived from EP have a number of good structural properties, including maxent-normality and an overall counting number of one. The result is a convenient framework for producing high-quality approximations with a user-adjustable level of complexity. ",Structured Region Graphs: Morphing EP into GBP " In recent years, there has been an increased need for the use of active systems - systems required to act automatically based on events, or changes in the environment. Such systems span many areas, from active databases to applications that drive the core business processes of today's enterprises. However, in many cases, the events to which the system must respond are not generated by monitoring tools, but must be inferred from other events based on complex temporal predicates. In addition, in many applications, such inference is inherently uncertain. In this paper, we introduce a formal framework for knowledge representation and reasoning enabling such event inference. Based on probability theory, we define the representation of the associated uncertainty. In addition, we formally define the probability space, and show how the relevant probabilities can be calculated by dynamically constructing a Bayesian network. To the best of our knowledge, this is the first work that enables taking such uncertainty into account in the context of active systems. Therefore, our contribution is twofold: We formally define the representation and semantics of event composition for probabilistic settings, and show how to apply these extensions to the quantification of the occurrence probability of events. These results enable any active system to handle such uncertainty. ",A Model for Reasoning with Uncertain Rules in Event Composition Systems " The feature of our method different from other fuzzy grey relation methods for super-mixed multiple attribute group decision-making is that all of the subjective and objective weights are obtained by interval grey number and that the group decision-making is performed based on the relative approach degree of grey TOPSIS, the relative approach degree of grey incidence and the relative membership degree of grey incidence using 4-dimensional Euclidean distance. The weighted Borda method is used to obtain the final rank by using the results of the four methods. An example shows the applicability of the proposed approach. ","Super-Mixed Multiple Attribute Group Decision Making Method Based on Hybrid Fuzzy Grey Relation Approach Degree" " The multiple attribute mixed type decision making is performed by four methods, that is, the relative approach degree of grey TOPSIS method, the relative approach degree of grey incidence, the relative membership degree of grey incidence and the grey relation relative approach degree method using the maximum entropy estimation, respectively. In these decision-making methods, the grey incidence degree in four-dimensional Euclidean space is used. The final arrangement result is obtained by the weighted Borda method. An example illustrates the applicability of the proposed approach. ","Generalized Hybrid Grey Relation Method for Multiple Attribute Mixed Type Decision Making" " We revisit the SeqBin constraint. This meta-constraint subsumes a number of important global constraints like Change, Smooth and IncreasingNValue. 
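Several of the grey-relation abstracts above rank alternatives by a relative approach degree computed from distances to ideal and anti-ideal solutions. The sketch below shows the crisp TOPSIS version of that computation; it uses plain real numbers and a single Euclidean distance, whereas the papers work with interval grey numbers and several incidence measures, so this is only a simplified stand-in.

```python
import numpy as np

def topsis_closeness(matrix, weights, benefit):
    """matrix: (alternatives, attributes); benefit[j] is True if larger is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)            # vector-normalise each attribute
    V = norm * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)                # distance to the ideal solution
    d_minus = np.linalg.norm(V - anti, axis=1)                # distance to the anti-ideal solution
    return d_minus / (d_plus + d_minus)                       # relative approach degree

scores = np.array([[7.0, 9.0, 9.0],
                   [8.0, 7.0, 8.0],
                   [9.0, 6.0, 8.0]])
cc = topsis_closeness(scores, weights=np.array([0.5, 0.3, 0.2]),
                      benefit=np.array([True, True, True]))
print(np.argsort(cc)[::-1])                                   # alternatives ranked best-first, here [2 0 1]
```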
We show that the previously proposed filtering algorithm for SeqBin has two drawbacks even under strong restrictions: it does not detect bounds disentailment and it is not idempotent. We identify the cause for these problems, and propose a new propagator that overcomes both issues. Our algorithm is based on a connection to the problem of finding a path of a given cost in a restricted $n$-partite graph. Our propagator enforces domain consistency in O(nd^2) and, for special cases of SeqBin that include Change, Smooth and IncreasingNValue, in O(nd) time. ",The SeqBin Constraint Revisited " This article introduces the benefits of using computer as a tool for foreign language teaching and learning. It describes the effect of using Natural Language Processing (NLP) tools for learning Arabic. The technique explored in this particular case is the employment of pedagogically indexed corpora. This text-based method provides the teacher the advantage of building activities based on texts adapted to a particular pedagogical situation. This paper also presents ARAC: a Platform dedicated to language educators allowing them to create activities within their own pedagogical area of interest. ",Arabic CALL system based on pedagogically indexed text " This article describes different models based on Bayesian networks RB modeling expertise in the diagnosis of brain tumors. Indeed, they are well adapted to the representation of the uncertainty in the process of diagnosis of these tumors. In our work, we first tested several structures derived from the Bayesian network reasoning performed by doctors on the one hand and structures generated automatically on the other. This step aims to find the best structure that increases diagnostic accuracy. The machine learning algorithms relate MWST-EM algorithms, SEM and SEM + T. To estimate the parameters of the Bayesian network from a database incomplete, we have proposed an extension of the EM algorithm by adding a priori knowledge in the form of the thresholds calculated by the first phase of the algorithm RBE . The very encouraging results obtained are discussed at the end of the paper ","Etude de Mod\`eles \`a base de r\'eseaux Bay\'esiens pour l'aide au diagnostic de tumeurs c\'er\'ebrales" " This paper proposes a grey interval relation TOPSIS for the decision making in which all of the attribute weights and attribute values are given by the interval grey numbers. The feature of our method different from other grey relation decision-making is that all of the subjective and objective weights are obtained by interval grey number and that decisionmaking is performed based on the relative approach degree of grey TOPSIS, the relative approach degree of grey incidence and the relative membership degree of grey incidence using 2-dimensional Euclidean distance. The weighted Borda method is used for combining the results of three methods. An example shows the applicability of the proposed approach. ","Novel Grey Interval Weight Determining and Hybrid Grey Interval Relation Method in Multiple Attribute Decision-Making" " Symbolic event recognition systems have been successfully applied to a variety of application domains, extracting useful information in the form of events, allowing experts or other systems to monitor and respond when significant events are recognised. In a typical event recognition application, however, these systems often have to deal with a significant amount of uncertainty. 
In this paper, we address the issue of uncertainty in logic-based event recognition by extending the Event Calculus with probabilistic reasoning. Markov Logic Networks are a natural candidate for our logic-based formalism. However, the temporal semantics of the Event Calculus introduce a number of challenges for the proposed model. We show how and under what assumptions we can overcome these problems. Additionally, we study how probabilistic modelling changes the behaviour of the formalism, affecting its key property, the inertia of fluents. Furthermore, we demonstrate the advantages of the probabilistic Event Calculus through examples and experiments in the domain of activity recognition, using a publicly available dataset for video surveillance. ",Probabilistic Event Calculus for Event Recognition " Understanding the structure and dynamics of biological networks is one of the important challenges in systems biology. In addition, increasing amounts of experimental data in biological networks necessitate the use of efficient methods to analyze these huge amounts of data. Such methods need to recognize common patterns in order to analyze the data. As biological networks can be modeled by graphs, the problem of common pattern recognition is equivalent to frequent subgraph mining in a set of graphs. In this paper, the challenges of frequent subgraph mining in biological networks are first introduced and the existing approaches are classified for each challenge. Then the algorithms are analyzed on the basis of the type of approach they apply to each of the challenges. ","Classification of Approaches and Challenges of Frequent Subgraphs Mining in Biological Networks" " This paper proposes a grey interval relation TOPSIS method for decision making in which all of the attribute weights and attribute values are given by interval grey numbers. In this paper, all of the subjective and objective weights are obtained by interval grey number and decision-making is based on four methods, including the relative approach degree of grey TOPSIS, the relative approach degree of grey incidence and the relative approach degree method using the maximum entropy estimation, using 2-dimensional Euclidean distance. A multiple attribute decision-making example for evaluation of artistic talent of Kayagum (stringed Korean harp) players is given to show the practicability of the proposed approach. ","Hybrid Grey Interval Relation Decision-Making in Artistic Talent Evaluation of Player" " The behavior composition problem involves automatically building a controller that is able to realize a desired, but unavailable, target system (e.g., a house surveillance) by suitably coordinating a set of available components (e.g., video cameras, blinds, lamps, a vacuum cleaner, phones, etc.). Previous work has almost exclusively aimed at bringing about the desired component in its totality, which is highly unsatisfactory for unsolvable problems. In this work, we develop an approach for approximate behavior composition without departing from the classical setting, thus making the problem applicable to a much wider range of cases. Based on the notion of simulation, we characterize what a maximal controller and the ""closest"" implementable target module (optimal approximation) are, and show how these can be computed using ATL model checking technology for a special case. We show the uniqueness of optimal approximations, and prove their soundness and completeness with respect to their imported controllers. 
",Qualitative Approximate Behavior Composition " We propose a variant of Alternating-time Temporal Logic (ATL) grounded in the agents' operational know-how, as defined by their libraries of abstract plans. Inspired by ATLES, a variant itself of ATL, it is possible in our logic to explicitly refer to ""rational"" strategies for agents developed under the Belief-Desire-Intention agent programming paradigm. This allows us to express and verify properties of BDI systems using ATL-type logical frameworks. ",Reasoning about Agent Programs using ATL-like Logics " We consider the problem of computing optimal generalised policies for relational Markov decision processes. We describe an approach combining some of the benefits of purely inductive techniques with those of symbolic dynamic programming methods. The latter reason about the optimal value function using first-order decision theoretic regression and formula rewriting, while the former, when provided with a suitable hypotheses language, are capable of generalising value functions or policies for small instances. Our idea is to use reasoning and in particular classical first-order regression to automatically generate a hypotheses language dedicated to the domain at hand, which is then used as input by an inductive solver. This approach avoids the more complex reasoning of symbolic dynamic programming while focusing the inductive solver's attention on concepts that are specifically relevant to the optimal value function for the domain considered. ",Exploiting First-Order Regression in Inductive Policy Selection " This paper proposes a decision theory for a symbolic generalization of probability theory (SP). Darwiche and Ginsberg [2,3] proposed SP to relax the requirement of using numbers for uncertainty while preserving desirable patterns of Bayesian reasoning. SP represents uncertainty by symbolic supports that are ordered partially rather than completely as in the case of standard probability. We show that a preference relation on acts that satisfies a number of intuitive postulates is represented by a utility function whose domain is a set of pairs of supports. We argue that a subjective interpretation is as useful and appropriate for SP as it is for numerical probability. It is useful because the subjective interpretation provides a basis for uncertainty elicitation. It is appropriate because we can provide a decision theory that explains how preference on acts is based on support comparison. ",Decision Making for Symbolic Probability " We present metrics for measuring the similarity of states in a finite Markov decision process (MDP). The formulation of our metrics is based on the notion of bisimulation for MDPs, with an aim towards solving discounted infinite horizon reinforcement learning tasks. Such metrics can be used to aggregate states, as well as to better structure other value function approximators (e.g., memory-based or nearest-neighbor approximators). We provide bounds that relate our metric distances to the optimal values of states in the given MDP. ",Metrics for Finite Markov Decision Processes " We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. 
We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently. ",Dynamic Programming for Structured Continuous Markov Decision Problems " We present a major improvement to the incremental pruning algorithm for solving partially observable Markov decision processes. Our technique targets the cross-sum step of the dynamic programming (DP) update, a key source of complexity in POMDP algorithms. Instead of reasoning about the whole belief space when pruning the cross-sums, our algorithm divides the belief space into smaller regions and performs independent pruning in each region. We evaluate the benefits of the new technique both analytically and experimentally, and show that it produces very significant performance gains. The results contribute to the scalability of POMDP algorithms to domains that cannot be handled by the best existing techniques. ",Region-Based Incremental Pruning for POMDPs " The aim of this work is to provide a unified framework for ordinal representations of uncertainty lying at the crossroads between possibility and probability theories. Such confidence relations between events are commonly found in monotonic reasoning, inconsistency management, or qualitative decision theory. They start either from probability theory, making it more qualitative, or from possibility theory, making it more expressive. We show that these two trends converge to a class of genuine probability theories. We provide characterization results for these useful tools that preserve the qualitative nature of possibility rankings, while enjoying the power of expressivity of additive representations. ",A Unified framework for order-of-magnitude confidence relations " The paper introduces mixed networks, a new framework for expressing and reasoning with probabilistic and deterministic information. The framework combines belief networks with constraint networks, defining the semantics and graphical representation. We also introduce the AND/OR search space for graphical models, and develop a new linear space search algorithm. This provides the basis for understanding the benefits of processing the constraint information separately, resulting in the pruning of the search space. When the constraint part is tractable or has a small number of solutions, using the mixed representation can be exponentially more effective than using pure belief networks which model constraints as conditional probability tables. ","Mixtures of Deterministic-Probabilistic Networks and their AND/OR Search Space" " The representation of independence relations generally builds upon the well-known semigraphoid axioms of independence. Recently, a representation has been proposed that captures a set of dominant statements of an independence relation from which any other statement can be generated by means of the axioms; the cardinality of this set is taken to indicate the complexity of the relation. Building upon the idea of dominance, we introduce the concept of stability to provide for a more compact representation of independence. We give an associated algorithm for establishing such a representation. We show that, with our concept of stability, many independence relations are found to be of lower complexity than with existing representations. 
",Stable Independance and Complexity of Representation " This paper investigates a representation language with flexibility inspired by probabilistic logic and compactness inspired by relational Bayesian networks. The goal is to handle propositional and first-order constructs together with precise, imprecise, indeterminate and qualitative probabilistic assessments. The paper shows how this can be achieved through the theory of credal networks. New exact and approximate inference algorithms based on multilinear programming and iterated/loopy propagation of interval probabilities are presented; their superior performance, compared to existing ones, is shown empirically. ","Propositional and Relational Bayesian Networks Associated with Imprecise and Qualitative Probabilistic Assesments" " Defeasible argumentation frameworks have evolved to become a sound setting to formalize commonsense, qualitative reasoning from incomplete and potentially inconsistent knowledge. Defeasible Logic Programming (DeLP) is a defeasible argumentation formalism based on an extension of logic programming. Although DeLP has been successfully integrated in a number of different real-world applications, DeLP cannot deal with explicit uncertainty, nor with vague knowledge, as defeasibility is directly encoded in the object language. This paper introduces P-DeLP, a new logic programming language that extends original DeLP capabilities for qualitative reasoning by incorporating the treatment of possibilistic uncertainty and fuzzy knowledge. Such features will be formalized on the basis of PGL, a possibilistic logic based on Godel fuzzy logic. ","A Logic Programming Framework for Possibilistic Argumentation with Vague Knowledge" " Previous work on sensitivity analysis in Bayesian networks has focused on single parameters, where the goal is to understand the sensitivity of queries to single parameter changes, and to identify single parameter changes that would enforce a certain query constraint. In this paper, we expand the work to multiple parameters which may be in the CPT of a single variable, or the CPTs of multiple variables. Not only do we identify the solution space of multiple parameter changes that would be needed to enforce a query constraint, but we also show how to find the optimal solution, that is, the one which disturbs the current probability distribution the least (with respect to a specific measure of disturbance). We characterize the computational complexity of our new techniques and discuss their applications to developing and debugging Bayesian networks, and to the problem of reasoning about the value (reliability) of new information. ","Sensitivity Analysis in Bayesian Networks: From Single to Multiple Parameters" " We consider the challenge of preference elicitation in systems that help users discover the most desirable item(s) within a given database. Past work on preference elicitation focused on structured models that provide a factored representation of users' preferences. Such models require less information to construct and support efficient reasoning algorithms. This paper makes two substantial contributions to this area: (1) Strong representation theorems for factored value functions. (2) A methodology that utilizes our representation results to address the problem of optimal item selection. ",Compact Value-Function Representations for Qualitative Preferences " Humans currently use arguments for explaining choices which are already made, or for evaluating potential choices. 
Each potential choice usually has pros and cons of various strengths. In spite of the usefulness of arguments in a decision making process, there have been few formal proposals handling this idea if we except works by Fox and Parsons and by Bonet and Geffner. In this paper we propose a possibilistic logic framework where arguments are built from an uncertain knowledge base and a set of prioritized goals. The proposed approach can compute two kinds of decisions by distinguishing between pessimistic and optimistic attitudes. When the available, maybe uncertain, knowledge is consistent, as well as the set of prioritized goals (which have to be fulfilled as far as possible), the method for evaluating decisions on the basis of arguments agrees with the possibility theory-based approach to decision-making under uncertainty. Taking advantage of its relation with formal approaches to defeasible argumentation, the proposed framework can be generalized in the case of partially inconsistent knowledge or goal bases. ",Using arguments for making decisions: A possibilistic logic approach " We introduce a probabilistic formalism subsuming Markov random fields of bounded tree width and probabilistic context free grammars. Our models are based on a representation of Boolean formulas that we call case-factor diagrams (CFDs). CFDs are similar to binary decision diagrams (BDDs) but are concise for circuits of bounded tree width (unlike BDDs) and can concisely represent the set of parse trees over a given string under a given context free grammar (also unlike BDDs). A probabilistic model consists of a CFD defining a feasible set of Boolean assignments and a weight (or cost) for each individual Boolean variable. We give an inside-outside algorithm for simultaneously computing the marginal of each Boolean variable, and a Viterbi algorithm for finding the minimum cost variable assignment. Both algorithms run in time proportional to the size of the CFD. ",Case-Factor Diagrams for Structured Probabilistic Modeling " Based on a recent development in the area of error control coding, we introduce the notion of convolutional factor graphs (CFGs) as a new class of probabilistic graphical models. In this context, the conventional factor graphs are referred to as multiplicative factor graphs (MFGs). This paper shows that CFGs are natural models for probability functions when summation of independent latent random variables is involved. In particular, CFGs capture a large class of linear models, where the linearity is in the sense that the observed variables are obtained as a linear transformation of the latent variables taking arbitrary distributions. We use Gaussian models and independent factor models as examples to demonstrate the use of CFGs. The requirement of a linear transformation between latent variables (with certain independence restrictions) and the observed variables, to an extent, limits the modelling flexibility of CFGs. This structural restriction however provides a powerful analytic tool to the framework of CFGs; that is, upon taking the Fourier transform of the function represented by the CFG, the resulting function is represented by an MFG with identical structure. This Fourier transform duality allows inference problems on a CFG to be solved on the corresponding dual MFG. 
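The convolutional factor graph abstract above rests on a classical fact: the distribution of a sum of independent latent variables is the convolution of their distributions, and the Fourier transform turns that convolution into a pointwise product. Below is a minimal numerical sketch of just this duality, not of CFG inference itself; the pmfs are illustrative values chosen here.

```python
import numpy as np

# Two independent discrete latent variables with small illustrative pmfs.
p_x = np.array([0.2, 0.5, 0.3])   # P(X = 0), P(X = 1), P(X = 2)
p_y = np.array([0.6, 0.4])        # P(Y = 0), P(Y = 1)

# Observed variable Z = X + Y: its pmf is the convolution of the latent pmfs.
p_z = np.convolve(p_x, p_y)

# Fourier duality: convolution becomes a pointwise product in the transform domain.
n = len(p_z)
p_z_via_fft = np.fft.ifft(np.fft.fft(p_x, n) * np.fft.fft(p_y, n)).real

print(p_z)                            # [0.12 0.38 0.38 0.12]
print(np.allclose(p_z, p_z_via_fft))  # True
```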
",Convolutional Factor Graphs as Probabilistic Models " As real-world Bayesian networks continue to grow larger and more complex, it is important to investigate the possibilities for improving the performance of existing algorithms of probabilistic inference. Motivated by examples, we investigate the dependency of the performance of Lazy propagation on the message computation algorithm. We show how Symbolic Probabilistic Inference (SPI) and Arc-Reversal (AR) can be used for computation of clique to clique messages in the addition to the traditional use of Variable Elimination (VE). In addition, the paper resents the results of an empirical evaluation of the performance of Lazy propagation using VE, SPI, and AR as the message computation algorithm. The results of the empirical evaluation show that for most networks, the performance of inference did not depend on the choice of message computation algorithm, but for some randomly generated networks the choice had an impact on both space and time performance. In the cases where the choice had an impact, AR produced the best results. ",An Empirical Evaluation of Possible Variations of Lazy Propagation " Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods cannot adequately address these problems. We present the first framework that can exploit problem structure for modeling and solving hybrid problems efficiently. We formulate these problems as hybrid Markov decision processes (MDPs with continuous and discrete state and action variables), which we assume can be represented in a factored way using a hybrid dynamic Bayesian network (hybrid DBN). This formulation also allows us to apply our methods to collaborative multiagent settings. We present a new linear program approximation method that exploits the structure of the hybrid MDP and lets us compute approximate value functions more efficiently. In particular, we describe a new factored discretization of continuous variables that avoids the exponential blow-up of traditional approaches. We provide theoretical bounds on the quality of such an approximation and on its scale-up potential. We support our theoretical arguments with experiments on a set of control problems with up to 28-dimensional continuous state space and 22-dimensional action space. ",Solving Factored MDPs with Continuous and Discrete Variables " Maximum a Posteriori assignment (MAP) is the problem of finding the most probable instantiation of a set of variables given the partial evidence on the other variables in a Bayesian network. MAP has been shown to be a NP-hard problem [22], even for constrained networks, such as polytrees [18]. Hence, previous approaches often fail to yield any results for MAP problems in large complex Bayesian networks. To address this problem, we propose AnnealedMAP algorithm, a simulated annealing-based MAP algorithm. The AnnealedMAP algorithm simulates a non-homogeneous Markov chain whose invariant function is a probability density that concentrates itself on the modes of the target density. We tested this algorithm on several real Bayesian networks. The results show that, while maintaining good quality of the MAP solutions, the AnnealedMAP algorithm is also able to solve many problems that are beyond the reach of previous approaches. 
",Annealed MAP " For many real-life Bayesian networks, common knowledge dictates that the output established for the main variable of interest increases with higher values for the observable variables. We define two concepts of monotonicity to capture this type of knowledge. We say that a network is isotone in distribution if the probability distribution computed for the output variable given specific observations is stochastically dominated by any such distribution given higher-ordered observations; a network is isotone in mode if a probability distribution given higher observations has a higher mode. We show that establishing whether a network exhibits any of these properties of monotonicity is coNPPP-complete in general, and remains coNP-complete for polytrees. We present an approximate algorithm for deciding whether a network is monotone in distribution and illustrate its application to a real-life network in oncology. ",Monotonicity in Bayesian Networks " We present a novel POMDP planning algorithm called heuristic search value iteration (HSVI).HSVI is an anytime algorithm that returns a policy and a provable bound on its regret with respect to the optimal policy. HSVI gets its power by combining two well-known techniques: attention-focusing search heuristics and piecewise linear convex representations of the value function. HSVI's soundness and convergence have been proven. On some benchmark problems from the literature, HSVI displays speedups of greater than 100 with respect to other state-of-the-art POMDP value iteration algorithms. We also apply HSVI to a new rover exploration problem 10 times larger than most POMDP problems in the literature. ",Heuristic Search Value Iteration for POMDPs " We characterize probabilities in Bayesian networks in terms of algebraic expressions called quasi-probabilities. These are arrived at by casting Bayesian networks as noisy AND-OR-NOT networks, and viewing the subnetworks that lead to a node as arguments for or against a node. Quasi-probabilities are in a sense the ""natural"" algebra of Bayesian networks: we can easily compute the marginal quasi-probability of any node recursively, in a compact form; and we can obtain the joint quasi-probability of any set of nodes by multiplying their marginals (using an idempotent product operator). Quasi-probabilities are easily manipulated to improve the efficiency of probabilistic inference. They also turn out to be representable as square-wave pulse trains, and joint and marginal distributions can be computed by multiplication and complementation of pulse trains. ",A New Characterization of Probabilities in Bayesian Networks " The sensitivities revealed by a sensitivity analysis of a probabilistic network typically depend on the entered evidence. For a real-life network therefore, the analysis is performed a number of times, with different evidence. Although efficient algorithms for sensitivity analysis exist, a complete analysis is often infeasible because of the large range of possible combinations of observations. In this paper we present a method for studying sensitivities that are invariant to the evidence entered. Our method builds upon the idea of establishing bounds between which a parameter can be varied without ever inducing a change in the most likely value of a variable of interest. ",Evidence-invariant Sensitivity Bounds " We consider the problem of estimating the distribution underlying an observed sample of data. 
Instead of maximum likelihood, which maximizes the probability of the observed values, we propose a different estimate, the high-profile distribution, which maximizes the probability of the observed profile: the number of symbols appearing any given number of times. We determine the high-profile distribution of several data samples, establish some of its general properties, and show that when the number of distinct symbols observed is small compared to the data size, the high-profile and maximum-likelihood distributions are roughly the same, but when the number of symbols is large, the distributions differ, and high-profile better explains the data. ",On Modeling Profiles instead of Values " A diagnostic policy specifies what test to perform next, based on the results of previous tests, and when to stop and make a diagnosis. Cost-sensitive diagnostic policies perform tradeoffs between (a) the cost of tests and (b) the cost of misdiagnoses. An optimal diagnostic policy minimizes the expected total cost. We formalize this diagnosis process as a Markov Decision Process (MDP). We investigate two types of algorithms for solving this MDP: systematic search based on the AO* algorithm and greedy search (particularly the Value of Information method). We investigate the issue of learning the MDP probabilities from examples, but only as they are relevant to the search for good policies. We neither learn nor assume a Bayesian network for the diagnosis process. Regularizers are developed to control overfitting and speed up the search. This research is the first that integrates overfitting prevention into systematic search. The paper has two contributions: it discusses the factors that make systematic search feasible for diagnosis, and it shows experimentally, on benchmark data sets, that systematic search methods produce better diagnostic policies than greedy methods. ",Learning Diagnostic Policies from Examples by Systematic Search " Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization for representing continuous chance variables in influence diagrams. Also, MTE potentials can be used to approximate utility functions. This paper introduces MTE influence diagrams, which can represent decision problems without restrictions on the relationships between continuous and discrete chance variables, without limitations on the distributions of continuous chance variables, and without limitations on the nature of the utility functions. In MTE influence diagrams, all probability distributions and the joint utility function (or its multiplicative factors) are represented by MTE potentials and decision nodes are assumed to have discrete state spaces. MTE influence diagrams are solved by variable elimination using a fusion algorithm. ",Hybrid Influence Diagrams Using Mixtures of Truncated Exponentials " Straightedge and compass construction problems are one of the oldest and most challenging problems in elementary mathematics. The central challenge, for a human or for a computer program, in solving construction problems is a huge search space. In this paper we analyze one family of triangle construction problems, aiming at detecting a small core of the underlying geometry knowledge. The analysis leads to a small set of needed definitions, lemmas and primitive construction steps, and consequently, to a simple algorithm for automated solving of problems from this family. The same approach can be applied to other families of construction problems. 
",Towards Understanding Triangle Construction Problems " In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available. ","The Arcade Learning Environment: An Evaluation Platform for General Agents" " Most merging operators are defined by semantics methods which have very high computational complexity. In order to have operators with a lower computational complexity, some merging operators defined in a syntactical way have be proposed. In this work we define some syntactical merging operators and exploring its rationality properties. To do that we constrain the belief bases to be sets of formulas very close to logic programs and the underlying logic is defined through forward chaining rule (Modus Ponens). We propose two types of operators: arbitration operators when the inputs are only two bases and fusion with integrity constraints operators. We introduce a set of postulates inspired of postulates LS, proposed by Liberatore and Shaerf and then we analyzed the first class of operators through these postulates. We also introduce a set of postulates inspired of postulates KP, proposed by Konieczny and Pino P\'erez and then we analyzed the second class of operators through these postulates. ","Exploring the rationality of some syntactic merging operators (extended version)" " The Direct Torque Control (DTC) is well known as an effective control technique for high performance drives in a wide variety of industrial applications and conventional DTC technique uses two constant reference value: torque and stator flux. In this paper, fuzzy logic based stator flux optimization technique for DTC drives that has been proposed. The proposed fuzzy logic based stator flux optimizer self-regulates the stator flux reference using induction motor load situation without need of any motor parameters. Simulation studies have been carried out with Matlab/Simulink to compare the proposed system behaviors at vary load conditions. Simulation results show that the performance of the proposed DTC technique has been improved and especially at low-load conditions torque ripple are greatly reduced with respect to the conventional DTC. ",Stator flux optimization on direct torque control with fuzzy logic " Sequential decision problems are often approximately solvable by simulating possible future action sequences. 
{\em Metalevel} decision procedures have been developed for selecting {\em which} action sequences to simulate, based on estimating the expected improvement in decision quality that would result from any particular simulation; an example is the recent work on using bandit algorithms to control Monte Carlo tree search in the game of Go. In this paper we develop a theoretical basis for metalevel decisions in the statistical framework of Bayesian {\em selection problems}, arguing (as others have done) that this is more appropriate than the bandit framework. We derive a number of basic results applicable to Monte Carlo selection problems, including the first finite sampling bounds for optimal policies in certain cases; we also provide a simple counterexample to the intuitive conjecture that an optimal policy will necessarily reach a decision in all cases. We then derive heuristic approximations in both Bayesian and distribution-free settings and demonstrate their superiority to bandit-based heuristics in one-shot decision problems and in Go. ",Selecting Computations: Theory and Applications " The rules of Sudoku are often specified using twenty seven \texttt{all\_different} constraints, referred to as the {\em big} \mrules. Using graphical proofs and exploratory logic programming, the following main and new result is obtained: many subsets of six of these big \mrules are redundant (i.e., they are entailed by the remaining twenty one \mrules), and six is maximal (i.e., removing more than six \mrules is not possible while maintaining equivalence). The corresponding result for binary inequality constraints, referred to as the {\em small} \mrules, is stated as a conjecture. ",Redundant Sudoku Rules " A recently identified problem is that of finding an optimal investment plan for a transportation network, given that a disaster such as an earthquake may destroy links in the network. The aim is to strengthen key links to preserve the expected network connectivity. A network based on the Istanbul highway system has thirty links and therefore a billion scenarios, but it has been estimated that sampling a million scenarios gives reasonable accuracy. In this paper we use symmetry reasoning to reduce the number of scenarios to a much smaller number, making sampling unnecessary. This result can be used to facilitate metaheuristic and exact approaches to the problem. ",Earthquake Scenario Reduction by Symmetry Reasoning " There is increasing awareness in the planning community that depending on complete models impedes the applicability of planning technology in many real world domains where the burden of specifying complete domain models is too high. In this paper, we consider a novel solution for this challenge that combines generative planning on incomplete domain models with a library of plan cases that are known to be correct. While this was arguably the original motivation for case-based planning, most existing case-based planners assume (and depend on) from-scratch planners that work on complete domain models. In contrast, our approach views the plan generated with respect to the incomplete model as a ""skeletal plan"" and augments it with directed mining of plan fragments from library cases. We will present the details of our approach and present an empirical evaluation of our method in comparison to a state-of-the-art case-based planner that depends on complete domain models. 
",Model-Lite Case-Based Planning " Gas Transmission Networks are large-scale complex systems, and corresponding design and control problems are challenging. In this paper, we consider the problem of control and management of these systems in crisis situations. We present these networks by a hybrid systems framework that provides required analysis models. Further, we discuss decision-making using computational discrete and hybrid optimization methods. In particular, several reinforcement learning methods are employed to explore decision space and achieve the best policy in a specific crisis situation. Simulations are presented to illustrate the efficiency of the method. ",Hybrid systems modeling for gas transmission network " Fuzzy rule based classification systems are one of the most popular fuzzy modeling systems used in pattern classification problems. This paper investigates the effect of applying nine different T-norms in fuzzy rule based classification systems. In the recent researches, fuzzy versions of confidence and support merits from the field of data mining have been widely used for both rules selecting and weighting in the construction of fuzzy rule based classification systems. For calculating these merits the product has been usually used as a T-norm. In this paper different T-norms have been used for calculating the confidence and support measures. Therefore, the calculations in rule selection and rule weighting steps (in the process of constructing the fuzzy rule based classification systems) are modified by employing these T-norms. Consequently, these changes in calculation results in altering the overall accuracy of rule based classification systems. Experimental results obtained on some well-known data sets show that the best performance is produced by employing the Aczel-Alsina operator in terms of the classification accuracy, the second best operator is Dubois-Prade and the third best operator is Dombi. In experiments, we have used 12 data sets with numerical attributes from the University of California, Irvine machine learning repository (UCI). ",Comparison of different T-norm operators in classification problems " This paper presents a novel fuzzy logic based Adaptive Super-twisting Sliding Mode Controller for the control of dynamic uncertain systems. The proposed controller combines the advantages of Second order Sliding Mode Control, Fuzzy Logic Control and Adaptive Control. The reaching conditions, stability and robustness of the system with the proposed controller are guaranteed. In addition, the proposed controller is well suited for simple design and implementation. The effectiveness of the proposed controller over the first order Sliding Mode Fuzzy Logic controller is illustrated by Matlab based simulations performed on a DC-DC Buck converter. Based on this comparison, the proposed controller is shown to obtain the desired transient response without causing chattering and error under steady-state conditions. The proposed controller is able to give robust performance in terms of rejection to input voltage variations and load variations. ","A Novel Fuzzy Logic Based Adaptive Supertwisting Sliding Mode Control Algorithm for Dynamic Uncertain Systems" " This paper deals with the implementation of Least Mean Square (LMS) algorithm in Decision Feedback Equalizer (DFE) for removal of Inter Symbol Interference (ISI) at the receiver. The channel disrupts the transmitted signal by spreading it in time. 
Although the LMS algorithm is robust and reliable, it is slow to converge. In order to increase the speed of convergence, modifications have been made in the algorithm where the weights get updated depending on the severity of the disturbance. ",Elimination of ISI Using Improved LMS Based Decision Feedback Equalizer " The early classifications of the computational complexity of planning under various restrictions in STRIPS (Bylander) and SAS+ (Baeckstroem and Nebel) have influenced subsequent research in planning in many ways. We go back and reanalyse their subclasses, but this time using the more modern tool of parameterized complexity analysis. This provides new results that together with the old results give a more detailed picture of the complexity landscape. We demonstrate separation results not possible with standard complexity theory, which contributes to explaining why certain cases of planning have seemed simpler in practice than theory has predicted. In particular, we show that certain restrictions of practical interest are tractable in the parameterized sense of the term, and that a simple heuristic is sufficient to make a well-known partial-order planner exploit this fact. ",The Complexity of Planning Revisited - A Parameterized Analysis " Cumulative resource constraints can model scarce resources in scheduling problems or a dimension in packing and cutting problems. In order to efficiently solve such problems with a constraint programming solver, it is important to have strong and fast propagators for cumulative resource constraints. One such propagator is the recently developed time-table-edge-finding propagator, which considers the current resource profile during the edge-finding propagation. Recently, lazy clause generation solvers, i.e. constraint programming solvers incorporating nogood learning, have proved to be excellent at solving scheduling and cutting problems. For such solvers, concise and accurate explanations of the reasons for propagation are essential for strong nogood learning. In this paper, we develop the first explaining version of time-table-edge-finding propagation and show preliminary results on resource-constrained project scheduling problems from various standard benchmark suites. On the standard benchmark suite PSPLib, we were able to close one open instance and to improve the lower bound of about 60% of the remaining open instances. Moreover, 6 of those instances were closed. ","Explaining Time-Table-Edge-Finding Propagation for the Cumulative Resource Constraint" " In the field of ontology matching, the most systematic evaluation of matching systems is established by the Ontology Alignment Evaluation Initiative (OAEI), which is an annual campaign for evaluating ontology matching systems organized by different groups of researchers. In this paper, we report on the results of an intermediary OAEI campaign called OAEI 2011.5. The evaluations of this campaign are divided into five tracks. Three of these tracks are new or have been improved compared to previous OAEI campaigns. Overall, we evaluated 18 matching systems. We discuss lessons learned, in terms of scalability, multilingual issues and the ability to deal with real world cases from different domains. ","Evaluating Ontology Matching Systems on Large, Multilingual and Real-world Test Cases" " Today, we can find many search engines which provide us with information which is more operational in nature. None of the search engines provide domain specific information. 
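The equalizer abstract above relies on the standard LMS weight update. The following is a minimal LMS adaptive filter sketch, with no decision feedback and no variable step size, used here only to illustrate the update rule; the toy channel and step size are assumptions for the example.

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.05):
    """Minimal LMS adaptive filter: adapt weights w so that w . x_window tracks d."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]   # x[n], x[n-1], ... (most recent first)
        y[n] = w @ window                        # filter output
        e = d[n] - y[n]                          # error against the desired signal
        w += mu * e * window                     # LMS weight update
    return w, y

# Toy demo: identify a known 2-tap channel from its input/output pair.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.8, -0.3])[:len(x)]         # channel output used as desired signal
w, _ = lms_filter(x, d)
print(np.round(w, 2))                            # approximately [0.8, -0.3, 0, 0]
```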
This becomes very troublesome for a novice user who wishes to find information in a particular domain. In this paper, we have developed an ontology which can be used by a domain specific search engine. We have developed an ontology on human anatomy, which captures information regarding the cardiovascular system, digestive system, skeleton and nervous system. This information can be used by people working in the medical and health care domain. ",OntoAna: Domain Ontology for Human Anatomy " Various methods for lifted probabilistic inference have been proposed, but our understanding of these methods and the relationships between them is still limited, compared to their propositional counterparts. The only existing theoretical characterization of lifting is for weighted first-order model counting (WFOMC), which was shown to be complete domain-lifted for the class of 2-logvar models. This paper makes two contributions to lifted variable elimination (LVE). First, we introduce a novel inference operator called group inversion. Second, we prove that LVE augmented with this operator is complete in the same sense as WFOMC. ",Lifted Variable Elimination: A Novel Operator and Completeness Results " The Generalized Traveling Salesman Problem (GTSP) is one of the NP-hard combinatorial optimization problems. A variant of GTSP is E-GTSP where E, meaning equality, has the constraint: exactly one node from a cluster of a graph partition is visited. The main objective of the E-GTSP is to find a minimum cost tour passing through exactly one node from each cluster of an undirected graph. Agent-based approaches are successfully used nowadays for solving complex real-life problems. The aim of the current paper is to illustrate some variants of agent-based algorithms including ant-based models with specific properties for solving E-GTSP. ","A Unifying Survey of Reinforced, Sensitive and Stigmergic Agent-Based Approaches for E-GTSP" " The current paper introduces a new parallel computing technique based on ant colony optimization for a dynamic routing problem. In the dynamic traveling salesman problem the distances between cities, taken as travel times, are no longer fixed. The new technique uses a parallel model for a problem variant that allows a slight movement of nodes within their neighborhoods. The algorithm is tested with success on several large data sets. ",Parallel ACO with a Ring Neighborhood for Dynamic TSP " This is the Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence, which was held in Helsinki, Finland, July 9 - 12 2008. ","Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (2008)" " This is the Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence, which was held in Vancouver, British Columbia, July 19 - 22 2007. ","Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence (2007)" " This is the Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, which was held in Edinburgh, Scotland July 26 - 29 2005. ","Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (2005)" " This is the Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, which was held in Cambridge, MA, July 13 - 16 2006. 
","Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (2006)" " This is the Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence, which was held in Banff, Canada, July 7 - 11 2004. ","Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (2004)" " In this paper it is introduced a biobjective ant algorithm for constructing low cost routing networks. The new algorithm is called the Distributed Pharaoh System (DPS). DPS is based on AntNet algorithm. The algorithm is using Pharaoh Ant System (PAS) with an extra-exploration phase and a 'no-entry' condition in order to improve the solutions for the Low Cost Network Routing problem. Additionally it is used a cost model for overlay network construction that includes network traffic demands. The Pharaoh ants (Monomorium pharaonis) includes negative pheromones with signals concentrated at decision points where trails fork. The negative pheromones may complement positive pheromone or could help ants to escape from an unnecessarily long route to food that is being reinforced by attractive signals. Numerical experiments were made for a random 10-node network. The average node degree of the network tested was 4.0. The results are encouraging. The algorithm converges to the shortest path while converging on a low cost overlay routing network topology. ",Distributed Pharaoh System for Network Routing " The Matrix Bandwidth Minimization Problem (MBMP) seeks for a simultaneous reordering of the rows and the columns of a square matrix such that the nonzero entries are collected within a band of small width close to the main diagonal. The MBMP is a NP-complete problem, with applications in many scientific domains, linear systems, artificial intelligence, and real-life situations in industry, logistics, information recovery. The complex problems are hard to solve, that is why any attempt to improve their solutions is beneficent. Genetic algorithms and ant-based systems are Soft Computing methods used in this paper in order to solve some MBMP instances. Our approach is based on a learning agent-based model involving a local search procedure. The algorithm is compared with the classical Cuthill-McKee algorithm, and with a hybrid genetic algorithm, using several instances from Matrix Market collection. Computational experiments confirm a good performance of the proposed algorithms for the considered set of MBMP instances. On Soft Computing basis, we also propose a new theoretical Reinforcement Learning model for solving the MBMP problem. ",Soft Computing approaches on the Bandwidth Problem " Network intrusion detection systems have become a crucial issue for computer systems security infrastructures. Different methods and algorithms are developed and proposed in recent years to improve intrusion detection systems. The most important issue in current systems is that they are poor at detecting novel anomaly attacks. These kinds of attacks refer to any action that significantly deviates from the normal behaviour which is considered intrusion. This paper proposed a model to improve this problem based on data mining techniques. Apriori algorithm is used to predict novel attacks and generate real-time rules for firewall. Apriori algorithm extracts interesting correlation relationships among large set of data items. This paper illustrates how to use Apriori algorithm in intrusion detection systems to cerate a automatic firewall rules generator to detect novel anomaly attack. 
Apriori is the best-known algorithm to mine association rules. This is an innovative way to find association rules on a large scale. ","Automatic firewall rules generator for anomaly detection systems with Apriori algorithm" " Two-dimensional bin packing problems are highly relevant combinatorial optimization problems. They find a large number of applications, for example, in the context of transportation or warehousing, and for the cutting of different materials such as glass, wood or metal. In this work we deal with the oriented two-dimensional bin packing problem under free guillotine cutting. In this specific problem a set of oriented rectangular items is given which must be packed into a minimum number of bins of equal size. The first algorithm proposed in this work is a randomized multi-start version of a constructive one-pass heuristic from the literature. Additionally we propose the use of this randomized one-pass heuristic within an evolutionary algorithm. The results of the two proposed algorithms are compared to the best approaches from the literature. In particular the evolutionary algorithm compares very favorably to current state-of-the-art approaches. The optimal solution for 4 previously unsolved instances could be found. ","On Solving the Oriented Two-Dimensional Bin Packing Problem under Free Guillotine Cutting: Exploiting the Power of Probabilistic Solution Construction" " Modern ontology debugging methods allow efficient identification and localization of faulty axioms defined by a user while developing an ontology. The ontology development process in this case is characterized by rather frequent and regular calls to a reasoner resulting in an early user awareness of modeling errors. In such a scenario an ontology usually includes only a small number of conflict sets, i.e. sets of axioms preserving the faults. This property allows efficient use of standard model-based diagnosis techniques based on the application of hitting set algorithms to a number of given conflict sets. However, in many use cases such as ontology alignment the ontologies might include many more conflict sets than in usual ontology development settings, thus making precomputation of conflict sets and consequently ontology diagnosis infeasible. In this paper we suggest a debugging approach based on a direct computation of diagnoses that omits calculation of conflict sets. Embedded in an ontology debugger, the proposed algorithm is able to identify diagnoses for an ontology which includes a large number of faults and for which application of standard diagnosis methods fails. The evaluation results show that the approach is practicable and is able to identify a fault in adequate time. ",Direct computation of diagnoses for ontology debugging " Matrices and their sub-blocks are introduced into the study of determining various extensions in the sense of Dung's theory of argumentation frameworks. It is shown that each argumentation framework has matrix representations, and that the core semantics defined by Dung can be characterized by specific sub-blocks of the matrix. Furthermore, elementary permutations of a matrix are employed, by which an efficient matrix approach for finding all extensions under a given semantics is obtained. 
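As a rough illustration of the Apriori-based rule mining described in the intrusion-detection abstract above, here is a naive levelwise frequent-itemset miner over a hypothetical connection log; rule generation and the firewall integration are left out, and the feature names are invented for the example.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Naive levelwise Apriori: all itemsets whose support reaches min_support."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    frequent = {}
    k_sets = [frozenset([i]) for i in items]
    while k_sets:
        k_sets = [s for s in k_sets if support(s) >= min_support]
        frequent.update({s: support(s) for s in k_sets})
        # Join frequent k-itemsets into candidate (k+1)-itemsets.
        k_sets = list({a | b for a, b in combinations(k_sets, 2) if len(a | b) == len(a) + 1})
    return frequent

# Hypothetical connection log: each "transaction" is the feature set of one event.
log = [{"tcp", "port80", "long_duration"},
       {"tcp", "port80"},
       {"udp", "port53"},
       {"tcp", "port80", "long_duration"}]
for itemset, sup in apriori(log, min_support=0.5).items():
    print(sorted(itemset), sup)
```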
Unlike several established approaches, such as the graph labelling algorithm and the Constraint Satisfaction Problem algorithm, the matrix approach not only brings a mathematical perspective to the investigation of the various extensions, but also fully achieves the goal of computing all the required extensions. ",A matrix approach for computing extensions of argumentation frameworks " The aim of this paper is to develop a methodology that is useful for analysing from a microeconomic perspective the incentives for entry, permanence and exit in the market for pharmaceutical generics under fuzzy conditions. In an empirical application of our proposed methodology, the potential for permanence of labs with different characteristics has been estimated. The case we deal with is set in an open market where global players diversify into different national markets of pharmaceutical generics. Risk issues are significantly important in deterring decision makers from expanding in the generic pharmaceutical business. However, not all players are affected in the same way and/or to the same extent. Small, non-diversified generics labs are in the worst position. We have highlighted that the expected NPV and the number of generics in the portfolio of a pharmaceutical lab are important variables, but that it is also important to consider the degree of diversification. Labs with a higher potential for diversification across markets have an advantage over smaller labs. We have described a fuzzy decision support system based on the Mamdani model in order to determine the incentives for a laboratory to remain in the market both when it is stable and when it is growing. ","On firm specific characteristics of pharmaceutical generics and incentives to permanence under fuzzy conditions" " Several variants of the Constraint Satisfaction Problem have been proposed and investigated in the literature for modelling those scenarios where solutions are associated with some given costs. Within these frameworks computing an optimal solution is an NP-hard problem in general; yet, when restricted over classes of instances whose constraint interactions can be modelled via (nearly-)acyclic graphs, this problem is known to be solvable in polynomial time. In this paper, larger classes of tractable instances are singled out, by discussing solution approaches based on exploiting hypergraph acyclicity and, more generally, structural decomposition methods, such as (hyper)tree decompositions. ","Tractable Optimization Problems through Hypergraph-Based Structural Restrictions" " Efficient ontology debugging is a cornerstone for many activities in the context of the Semantic Web, especially when automatic tools produce (parts of) ontologies such as in the field of ontology matching. The best currently known interactive debugging systems rely upon some meta information in terms of fault probabilities, which can speed up the debugging procedure in the good case, but can also have a negative impact on performance in the bad case. The problem is that assessment of the meta information is only possible a-posteriori. Consequently, as long as the actual fault is unknown, there is always some risk of suboptimal interactive diagnoses discrimination. As an alternative, one might prefer to rely on a tool which pursues a no-risk strategy. In this case, however, possibly well-chosen meta information cannot be exploited, resulting again in inefficient debugging actions. 
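The argumentation abstract above characterizes Dung-style extensions through an attack matrix. A brute-force sketch of one such semantics (stable extensions) computed directly from a 0/1 attack matrix is shown below; it does not reproduce the paper's sub-block and permutation machinery, and the example framework is made up.

```python
from itertools import combinations
import numpy as np

def stable_extensions(attack):
    """Brute-force stable extensions from an n x n 0/1 attack matrix.

    attack[i][j] = 1 iff argument i attacks argument j. A set S is stable iff
    it is conflict-free and attacks every argument outside S.
    """
    attack = np.asarray(attack)
    n = len(attack)
    extensions = []
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            s = set(subset)
            conflict_free = not any(attack[i, j] for i in s for j in s)
            attacks_rest = all(any(attack[i, j] for i in s) for j in range(n) if j not in s)
            if conflict_free and attacks_rest:
                extensions.append(s)
    return extensions

# Arguments 0, 1, 2 with attacks 0->1, 1->0 and 1->2.
M = [[0, 1, 0],
     [1, 0, 1],
     [0, 0, 0]]
print(stable_extensions(M))   # [{1}, {0, 2}]
```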
In this work we present a reinforcement learning strategy that continuously adapts its behavior depending on the performance achieved and minimizes the risk of using low-quality meta information. Therefore, this method is suitable for application scenarios where reliable a-priori fault estimates are difficult to obtain. Using problematic ontologies in the field of ontology matching, we show that the proposed risk-aware query strategy outperforms both active learning approaches and no-risk strategies on average in terms of the required amount of user interaction. ",RIO: Minimizing User Interaction in Ontology Debugging " In Programming by Example, a system attempts to infer a program from input and output examples, generally by searching for a composition of certain base functions. Performing a naive brute force search is infeasible for even mildly involved tasks. We note that the examples themselves often present clues as to which functions to compose, and how to rank the resulting programs. In text processing, which is our domain of interest, clues arise from simple textual features: for example, if parts of the input and output strings are permutations of one another, this suggests that sorting may be useful. We describe a system that learns the reliability of such clues, allowing for faster search and a principled ranking over programs. Experiments on a prototype of this system show that this learning scheme facilitates efficient inference on a range of text processing tasks. ",Textual Features for Programming by Example " Knowledge representation (KR) and inference mechanisms are essential for making a system intelligent. A system is considered intelligent if its intelligence is equivalent to that of a human being, either for a particular domain or in general. Because of incomplete, ambiguous and uncertain information, the task of building an intelligent system is very difficult. The objective of this paper is to present a hybrid KR technique for making the system effective and optimistic. Effectiveness and optimism are required because the system must be able to return an answer with some confidence factor. This paper also presents a comparison of various hybrid KR techniques with the proposed one. ","Hybrid technique for effective knowledge representation & a comparative study" " Existing theoretical universal algorithmic intelligence models are not practically realizable. A more pragmatic approach to artificial general intelligence is based on cognitive architectures, which are, however, non-universal in the sense that they can construct and use models of the environment only from Turing-incomplete model spaces. We believe that the way to real AGI consists in bridging the gap between these two approaches. This is possible if one considers cognitive functions as a ""cognitive bias"" (priors and search heuristics) that should be incorporated into the models of universal algorithmic intelligence without violating their universality. Earlier reported results supporting this approach, and its overall feasibility, are discussed using the examples of perception, planning, knowledge representation, attention, theory of mind, language, and some others. ",Cognitive Bias for Universal Algorithmic Intelligence " The paper presents a comparison of various soft computing techniques used for filtering and enhancing speech signals. The three major techniques that fall under soft computing are neural networks, fuzzy systems and genetic algorithms. 
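The Programming by Example abstract above describes reading textual clues off the examples to prioritize which base functions to compose. A toy sketch under assumed base functions follows; the clue scoring, the function library and the example strings are all invented for illustration and do not reflect the paper's actual system.

```python
import itertools

# Hypothetical base string functions; the real system's library would differ.
FUNCS = {
    "sort_chars": lambda s: "".join(sorted(s)),
    "reverse":    lambda s: s[::-1],
    "upper":      lambda s: s.upper(),
    "strip":      lambda s: s.strip(),
}

def textual_clues(examples):
    """Rank function names by simple clues read off the examples themselves."""
    scores = dict.fromkeys(FUNCS, 0)
    for inp, out in examples:
        if sorted(inp) == sorted(out):      # output is a permutation of the input
            scores["sort_chars"] += 1
            scores["reverse"] += 1
        if inp.upper() == out:
            scores["upper"] += 1
    return sorted(FUNCS, key=lambda name: -scores[name])

def synthesize(examples, max_len=2):
    """Return the first composition (applied left to right) consistent with all examples."""
    ordered = textual_clues(examples)
    for length in range(1, max_len + 1):
        for combo in itertools.product(ordered, repeat=length):
            def run(s, combo=combo):
                for name in combo:
                    s = FUNCS[name](s)
                return s
            if all(run(i) == o for i, o in examples):
                return combo
    return None

print(synthesize([("bca", "abc"), ("fed", "def")]))   # ('sort_chars',)
```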
Other hybrid techniques such as neuro-fuzzy systems are also available. In general, soft computing techniques have been experimentally observed to give far superior performance as compared to non-soft computing techniques in terms of robustness and accuracy. ",Speech Signal Filters based on Soft Computing Techniques: A Comparison " This paper discusses the merits and demerits of crisp logic and fuzzy logic with respect to their applicability in intelligent response generation by a human being and by a robot. Intelligent systems must have the capability of taking decisions that are wise and handle situations intelligently. A direct relationship exists between the level of perfection in handling a situation and the level of completeness of the available knowledge or information or data required to handle the situation. The paper concludes that the use of crisp logic with complete knowledge leads to perfection in handling situations whereas fuzzy logic can handle situations imperfectly only. However, when only incomplete knowledge is available, fuzzy theory is more effective, though it may be disadvantageous compared to crisp logic. ","Applicability of Crisp and Fuzzy Logic in Intelligent Response Generation" " For the past few decades, man has been trying to create an intelligent computer which can talk and respond like he can. The task of creating a system that can talk like a human being is the primary objective of Automatic Speech Recognition. Various Speech Recognition techniques have been developed in theory and have been applied in practice. This paper discusses the problems that have been encountered in developing Speech Recognition, the techniques that have been applied to automate the task, and a representation of the core problems of present day Speech Recognition by using Fuzzy Mathematics. ","Application of Fuzzy Mathematics to Speech-to-Text Conversion by Elimination of Paralinguistic Content" " A definition of Artificial Intelligence was proposed in [1] but this definition was not absolutely formal at least because the word ""Human"" was used. In this paper we will formalize the definition from [1]. The biggest problem in this definition was that the level of intelligence of AI is compared to the intelligence of a human being. In order to change this we will introduce some parameters on which AI will depend. One of these parameters will be the level of intelligence, and we will define one AI for each level of intelligence. We assume that for some level of intelligence the respective AI will be more intelligent than a human being. Nevertheless, we cannot say which level this is because we cannot calculate its exact value. ",Formal Definition of AI " The theory of rough sets is concerned with the lower and upper approximations of objects through a binary relation on a universe. It has been applied to machine learning, knowledge discovery and data mining. The theory of matroids is a generalization of linear independence in vector spaces. It has been used in combinatorial optimization and algorithm design. In order to take advantage of both rough sets and matroids, in this paper we propose a matroidal structure of rough sets based on a serial and transitive relation on a universe. We define the family of all minimal neighborhoods of a relation on a universe, and prove that it satisfies the circuit axioms of matroids when the relation is serial and transitive. 
In order to further study this matroidal structure, we investigate the inverse of this construction: inducing a relation by a matroid. The relationships between the upper approximation operators of rough sets based on relations and the closure operators of matroids in the above two constructions are studied. Moreover, we investigate the connections between the above two constructions. ","Matroidal structure of rough sets based on serial and transitive relations" " In this paper, we propose a new type of matroids, namely covering matroids, and investigate the connections with the second type of covering-based rough sets and some existing special matroids. Firstly, as an extension of partitions, coverings are more natural combinatorial objects and can sometimes be more efficient for dealing with problems in the real world. By extending partitions to coverings, we propose a new type of matroids called covering matroids and prove them to be an extension of partition matroids. Secondly, since some researchers have successfully applied partition matroids to classical rough sets, we study the relationships between covering matroids and covering-based rough sets which are an extension of classical rough sets. Thirdly, in matroid theory, there are many special matroids, such as transversal matroids, partition matroids, 2-circuit matroids and partition-circuit matroids. The relationships among several special matroids and covering matroids are studied. ",Covering matroid " Recently, the relationship between matroids and generalized rough sets based on relations has been studied from the viewpoint of linear independence of matrices. In this paper, we reveal more relationships by means of the predecessor and successor neighborhoods of relations. First, through these two neighborhoods, we propose a pair of matroids, namely the predecessor relation matroid and the successor relation matroid, respectively. Basic characteristics of this pair of matroids, such as dependent sets, circuits, the rank function and the closure operator, are described by the predecessor and successor neighborhoods of relations. Second, we induce a relation from a matroid through the circuits of the matroid. We prove that the induced relation is always an equivalence relation. With these two inductions, a relation induces a relation matroid, and the relation matroid induces an equivalence relation; the connection between the original relation and the induced equivalence relation is then studied. Moreover, the relationships between the upper approximation operator in generalized rough sets and the closure operator in matroids are investigated. ","Relation matroid and its relationship with generalized rough set based on relation" " Rough sets are efficient for data pre-processing in data mining. Lower and upper approximations are two core concepts of rough sets. This paper studies generalized rough sets based on symmetric and transitive relations from the operator-oriented view by matroidal approaches. We first construct a matroidal structure of generalized rough sets based on symmetric and transitive relations, and provide an approach to study the matroid induced by a symmetric and transitive relation. Secondly, this paper establishes a close relationship between matroids and generalized rough sets. Approximation quality and roughness of generalized rough sets can be computed by the circuits of matroid theory. Finally, a symmetric and transitive relation can be constructed from a matroid with certain special properties. 
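Several of the rough-set abstracts above start from the lower and upper approximations induced by an equivalence relation. A minimal sketch of those two operators over an explicit partition follows; the universe, classes and target set are made up for the example.

```python
def approximations(blocks, target):
    """Lower and upper approximations of `target` under the partition `blocks`."""
    target = set(target)
    lower = set().union(*[set(b) for b in blocks if set(b) <= target])   # unions of classes inside X
    upper = set().union(*[set(b) for b in blocks if set(b) & target])    # unions of classes meeting X
    return lower, upper

# Toy universe {1,...,6} partitioned into equivalence classes.
classes = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}
print(approximations(classes, X))   # ({1, 2}, {1, 2, 3, 4})
```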
","Matroidal structure of generalized rough sets based on symmetric and transitive relations" " At present, practical application and theoretical discussion of rough sets are two hot problems in computer science. The core concepts of rough set theory are upper and lower approximation operators based on equivalence relations. Matroid, as a branch of mathematics, is a structure that generalizes linear independence in vector spaces. Further, matroid theory borrows extensively from the terminology of linear algebra and graph theory. We can combine rough set theory with matroid theory through using rough sets to study some characteristics of matroids. In this paper, we apply rough sets to matroids through defining a family of sets which are constructed from the upper approximation operator with respect to an equivalence relation. First, we prove the family of sets satisfies the support set axioms of matroids, and then we obtain a matroid. We say the matroids induced by the equivalence relation and a type of matroid, namely support matroid, is induced. Second, through rough sets, some characteristics of matroids such as independent sets, support sets, bases, hyperplanes and closed sets are investigated. ",Some characteristics of matroids through rough sets " Neighborhood is an important concept in covering based rough sets. That under what condition neighborhoods form a partition is a meaningful issue induced by this concept. Many scholars have paid attention to this issue and presented some necessary and sufficient conditions. However, there exists one common trait among these conditions, that is they are established on the basis of all neighborhoods have been obtained. In this paper, we provide a necessary and sufficient condition directly based on the covering itself. First, we investigate the influence of that there are reducible elements in the covering on neighborhoods. Second, we propose the definition of uniform block and obtain a sufficient condition from it. Third, we propose the definitions of repeat degree and excluded number. By means of the two concepts, we obtain a necessary and sufficient condition for neighborhoods to form a partition. In a word, we have gained a deeper and more direct understanding of the essence over that neighborhoods form a partition. ","Condition for neighborhoods in covering based rough sets to form a partition" " Rough sets are efficient for data pre-processing in data mining. As a generalization of the linear independence in vector spaces, matroids provide well-established platforms for greedy algorithms. In this paper, we apply rough sets to matroids and study the contraction of the dual of the corresponding matroid. First, for an equivalence relation on a universe, a matroidal structure of the rough set is established through the lower approximation operator. Second, the dual of the matroid and its properties such as independent sets, bases and rank function are investigated. Finally, the relationships between the contraction of the dual matroid to the complement of a single point set and the contraction of the dual matroid to the complement of the equivalence class of this point are studied. ",Rough sets and matroidal contraction " It is a meaningful issue that under what condition neighborhoods induced by a covering are equal to the covering itself. A necessary and sufficient condition for this issue has been provided by some scholars. In this paper, through a counter-example, we firstly point out the necessary and sufficient condition is false. 
Second, we present a necessary and sufficient condition for this issue. Third, we concentrate on the inverse issue of computing neighborhoods by a covering, namely, given an arbitrary covering, whether there exists another covering such that the neighborhoods induced by it are exactly the former covering. We present a necessary and sufficient condition for this issue as well. In a word, through the study on the two fundamental issues induced by neighborhoods, we have gained a deeper understanding of the relationship between neighborhoods and the covering which induces them. ","Condition for neighborhoods induced by a covering to be equal to the covering itself" " Rough sets are efficient for data pre-processing in data mining. Matroids are based on linear algebra and graph theory, and have a variety of applications in many fields. Both rough sets and matroids are closely related to lattices. For a serial and transitive relation on a universe, the collection of all the regular sets of the generalized rough set is a lattice. In this paper, we use the lattice to construct a matroid and then study relationships between the lattice and the closed-set lattice of the matroid. First, the collection of all the regular sets based on a serial and transitive relation is proved to be a semimodular lattice. Then, a matroid is constructed through the height function of the semimodular lattice. Finally, we propose an approach to obtain all the closed sets of the matroid from the semimodular lattice. By borrowing from matroids, the results show that lattice theory provides an interesting view from which to investigate rough sets. ","Closed-set lattice of regular sets based on a serial and transitive relation through matroids" " Coverings are a common type of data structure, and covering-based rough set theory is an efficient tool for processing such data. The lattice is an important algebraic structure used extensively in investigating some types of generalized rough sets. In this paper, we propose two families of sets and study the conditions under which they form certain lattice structures. These two families consist of the fixed points of the lower approximations of the first type and the sixth type of covering-based rough sets, respectively. These two sets are called the fixed point set of neighborhoods and the fixed point set of covering, respectively. First, for any covering, the fixed point set of neighborhoods is a complete and distributive lattice and, at the same time, a double p-algebra. In particular, when the neighborhoods form a partition of the universe, the fixed point set of neighborhoods is both a boolean lattice and a double Stone algebra. Second, for any covering, the fixed point set of covering is a complete lattice. When the covering is unary, the fixed point set of covering becomes a distributive lattice and a double p-algebra. In particular, when the reduction of the covering forms a partition of the universe, the fixed point set of covering is both a boolean lattice and a double Stone algebra. ","Lattice structures of fixed points of the lower approximations of two types of covering-based rough sets" " Taaable is a case-based reasoning system that adapts cooking recipes to user constraints. Within it, the preparation part of recipes is formalised as a graph. 
This graph is a semantic representation of the sequence of instructions composing the cooking process and is used to compute the procedure adaptation, conjointly with the textual adaptation. It is composed of cooking actions and ingredients, among others, represented as vertices, and semantic relations between those, shown as arcs, and is built automatically thanks to natural language processing. The results of the automatic annotation process is often a disconnected graph, representing an incomplete annotation, or may contain errors. Therefore, a validating and correcting step is required. In this paper, we present an existing graphic tool named \kcatos, conceived for representing and editing decision trees, and show how it has been adapted and integrated in WikiTaaable, the semantic wiki in which the knowledge used by Taaable is stored. This interface provides the wiki users with a way to correct the case representation of the cooking process, improving at the same time the quality of the knowledge about cooking procedures stored in WikiTaaable. ","Semi-automatic annotation process for procedural texts: An application on cooking recipes" " Efficient Natural Evolution Strategies (eNES) is a novel alternative to conventional evolutionary algorithms, using the natural gradient to adapt the mutation distribution. Unlike previous methods based on natural gradients, eNES uses a fast algorithm to calculate the inverse of the exact Fisher information matrix, thus increasing both robustness and performance of its evolution gradient estimation, even in higher dimensions. Additional novel aspects of eNES include optimal fitness baselines and importance mixing (a procedure for updating the population with very few fitness evaluations). The algorithm yields competitive results on both unimodal and multimodal benchmarks. ",Efficient Natural Evolution Strategies " This paper assumes the hypothesis that human learning is perception based, and consequently, the learning process and perceptions should not be represented and investigated independently or modeled in different simulation spaces. In order to keep the analogy between the artificial and human learning, the former is assumed here as being based on the artificial perception. Hence, instead of choosing to apply or develop a Computational Theory of (human) Perceptions, we choose to mirror the human perceptions in a numeric (computational) space as artificial perceptions and to analyze the interdependence between artificial learning and artificial perception in the same numeric space, using one of the simplest tools of Artificial Intelligence and Soft Computing, namely the perceptrons. As practical applications, we choose to work around two examples: Optical Character Recognition and Iris Recognition. In both cases a simple Turing test shows that artificial perceptions of the difference between two characters and between two irides are fuzzy, whereas the corresponding human perceptions are, in fact, crisp. ","Examples of Artificial Perceptions in Optical Character Recognition and Iris Recognition" " In E-learning, there is still the problem of knowing how to ensure an individualized and continuous learner's follow-up during learning process, indeed among the numerous tools proposed, very few systems concentrate on a real time learner's follow-up. Our work in this field develops the design and implementation of a Multi-Agents System Based on Dynamic Case Based Reasoning which can initiate learning and provide an individualized follow-up of learner. 
When interacting with the platform, every learner leaves his/her traces in the machine. These traces are stored in a base in the form of scenarios which enrich the collective past experience. The system monitors, compares and analyses these traces to keep a constant intelligent watch and therefore detect difficulties hindering progress and/or avoid possible dropping out. The system can support any learning subject. The success of a case-based reasoning system depends critically on the performance of the retrieval step used and, more specifically, on the similarity measure used to retrieve scenarios that are similar to the course of the learner (traces in progress). We propose a complementary similarity measure, named Inverse Longest Common Sub-Sequence (ILCSS). To help and guide the learner, the system is equipped with combined virtual and human tutors. ","Multi-Agents Dynamic Case Based Reasoning and The Inverse Longest Common Sub-Sequence And Individualized Follow-up of Learners in The CEHL" " Covering-based rough set theory is a useful tool to deal with inexact, uncertain or vague knowledge in information systems. Topology, one of the most important subjects in mathematics, provides mathematical tools and interesting topics in studying information systems and rough sets. In this paper, we present topological characterizations of three types of covering approximation operators. First, we study the properties of the topology induced by the sixth type of covering lower approximation operator. Second, some topological characterizations for the covering lower approximation operator to be an interior operator are established. We find that the topologies induced by this operator and by the sixth type of covering lower approximation operator are the same. Third, we study the conditions which make the first type of covering upper approximation operator a closure operator, and find that the topology induced by this operator is the same as the topology induced by the fifth type of covering upper approximation operator. Fourth, the conditions for the second type of covering upper approximation operator to be a closure operator and the properties of the topology induced by it are established. Finally, these three topological spaces are compared. In a word, topology provides a useful method to study covering-based rough sets. ","Topological characterizations to three types of covering approximation operators" " Covering-based rough set theory is a useful tool to deal with inexact, uncertain or vague knowledge in information systems. Geometric lattices have been widely used in diverse fields, especially in search algorithm design, which plays an important role in covering reductions. In this paper, we construct four geometric lattice structures of covering-based rough sets through matroids, and compare their relationships. First, a geometric lattice structure of covering-based rough sets is established through the transversal matroid induced by the covering, and its characteristics including atoms, modular elements and modular pairs are studied. We also construct a one-to-one correspondence between this type of geometric lattices and transversal matroids in the context of covering-based rough sets. Second, sufficient and necessary conditions for three types of covering upper approximation operators to be closure operators of matroids are presented. We exhibit three types of matroids through closure axioms, and then obtain three geometric lattice structures of covering-based rough sets.
Third, these four geometric lattice structures are compared. Some core concepts such as reducible elements in covering-based rough sets are investigated with geometric lattices. In a word, this work points out an interesting view, namely geometric lattice, to study covering-based rough sets. ","Geometric lattice structure of covering-based rough sets through matroids" " The measurement error with normal distribution is universal in applications. Generally, smaller measurement error requires better instrument and higher test cost. In decision making based on attribute values of objects, we shall select an attribute subset with appropriate measurement error to minimize the total test cost. Recently, error-range-based covering rough set with uniform distribution error was proposed to investigate this issue. However, the measurement errors satisfy normal distribution instead of uniform distribution which is rather simple for most applications. In this paper, we introduce normal distribution measurement errors to covering-based rough set model, and deal with test-cost-sensitive attribute reduction problem in this new model. The major contributions of this paper are four-fold. First, we build a new data model based on normal distribution measurement errors. With the new data model, the error range is an ellipse in a two-dimension space. Second, the covering-based rough set with normal distribution measurement errors is constructed through the ""3-sigma"" rule. Third, the test-cost-sensitive attribute reduction problem is redefined on this covering-based rough set. Fourth, a heuristic algorithm is proposed to deal with this problem. The algorithm is tested on ten UCI (University of California - Irvine) datasets. The experimental results show that the algorithm is more effective and efficient than the existing one. This study is a step toward realistic applications of cost-sensitive learning. ","Test-cost-sensitive attribute reduction of data with normal distribution measurement errors" " Recently, in order to broad the application and theoretical areas of rough sets and matroids, some authors have combined them from many different viewpoints, such as circuits, rank function, spanning sets and so on. In this paper, we connect the second type of covering-based rough sets and matroids from the view of closure operators. On one hand, we establish a closure system through the fixed point family of the second type of covering lower approximation operator, and then construct a closure operator. For a covering of a universe, the closure operator is a closure one of a matroid if and only if the reduct of the covering is a partition of the universe. On the other hand, we investigate the sufficient and necessary condition that the second type of covering upper approximation operation is a closure one of a matroid. ","Relationship between the second type of covering-based rough set and matroid via closure operator" " The questions which we will consider here are ""What is AI?"" and ""How can we make AI?"". Here we will present the definition of AI in terms of multi-agent systems. This means that here you will not find a new answer to the question ""What is AI?"", but an old answer in a new form. This new form of the definition of AI is of interest for the theory of multi-agent systems because it gives us better understanding of this theory. More important is that this work will help us answer the second question. We want to make a program which is capable of constructing a model of its environment. 
Every multi-agent model is equivalent to a single-agent model, but multi-agent models are more natural and accordingly more easily discoverable. ",The Definition of AI in Terms of Multi Agent Systems " In this paper we offer a formal definition of Artificial Intelligence and this directly gives us an algorithm for the construction of this object. In practice, this algorithm is useless due to the combinatorial explosion. The main innovation in our definition is that it does not include knowledge as a part of intelligence. So according to our definition a newborn baby is also an Intellect. Here we differ from Turing's definition, which suggests that an Intellect is a person with knowledge gained through the years. ",A Definition of Artificial Intelligence " Answer Set Programming (ASP) is a well-known problem solving approach based on nonmonotonic logic programs and efficient solvers. To enable access to external information, HEX-programs extend programs with external atoms, which allow for a bidirectional communication between the logic program and external sources of computation (e.g., description logic reasoners and Web resources). Current solvers evaluate HEX-programs by a translation to ASP itself, in which values of external atoms are guessed and verified after the ordinary answer set computation. This elegant approach does not scale with the number of external accesses in general, in particular in the presence of nondeterminism (which is instrumental for ASP). In this paper, we present a novel, native algorithm for evaluating HEX-programs which uses learning techniques. In particular, we extend conflict-driven ASP solving techniques, which prevent the solver from running into the same conflict again, from ordinary to HEX-programs. We show how to gain additional knowledge from external source evaluations and how to use it in a conflict-driven algorithm. We first target the uninformed case, i.e., when we have no extra information on external sources, and then extend our approach to the case where additional meta-information is available. Experiments show that learning from external sources can significantly decrease both the runtime and the number of considered candidate compatible sets. ",Conflict-driven ASP Solving with External Sources " In order to build AI we have to create a program which copes well in an arbitrary world. In this paper we will restrict our attention to one concrete world, which represents the game Tick-Tack-Toe. This world is a very simple one, but it is sufficiently complicated for our task because most people cannot cope with it. The main difficulty in this world is that the player cannot see the entire internal state of the world, so he has to build a model in order to understand the world. The model which we will offer will consist of finite automata and first-order formulas. ",AI in arbitrary world " Supply Chain coordination has become a critical success factor for Supply Chain management (SCM) and for effectively improving the performance of organizations in various industries. Companies are increasingly located at the intersection of one or more corporate networks which are designated by ""Supply Chain"". Managing this chain is mainly based on 'information sharing' and redeployment activities between the various links that comprise it. Several attempts have been made by industrialists and researchers to educate policymakers about the gains to be made by the implementation of cooperative relationships.
The approach presented in this paper here is among the works that aim to propose solutions related to information systems distributed Supply Chains to enable the different actors of the chain to improve their performance. We propose in particular solutions that focus on cooperation between actors in the Supply Chain. ",An Agent-based framework for cooperation in Supply Chain " Local Optima Networks (LONs) have been recently proposed as an alternative model of combinatorial fitness landscapes. The model compresses the information given by the whole search space into a smaller mathematical object that is the graph having as vertices the local optima and as edges the possible weighted transitions between them. A new set of metrics can be derived from this model that capture the distribution and connectivity of the local optima in the underlying configuration space. This paper departs from the descriptive analysis of local optima networks, and actively studies the correlation between network features and the performance of a local search heuristic. The NK family of landscapes and the Iterated Local Search metaheuristic are considered. With a statistically-sound approach based on multiple linear regression, it is shown that some LONs' features strongly influence and can even partly predict the performance of a heuristic search algorithm. This study validates the expressive power of LONs as a model of combinatorial fitness landscapes. ",Local optima networks and the performance of iterated local search " We propose an approach to lifted approximate inference for first-order probabilistic models, such as Markov logic networks. It is based on performing exact lifted inference in a simplified first-order model, which is found by relaxing first-order constraints, and then compensating for the relaxation. These simplified models can be incrementally improved by carefully recovering constraints that have been relaxed, also at the first-order level. This leads to a spectrum of approximations, with lifted belief propagation on one end, and exact lifted inference on the other. We discuss how relaxation, compensation, and recovery can be performed, all at the firstorder level, and show empirically that our approach substantially improves on the approximations of both propositional solvers and lifted belief propagation. ","Lifted Relax, Compensate and then Recover: From Approximate to Exact Lifted Probabilistic Inference" " The MPE (Most Probable Explanation) query plays an important role in probabilistic inference. MPE solution algorithms for probabilistic relational models essentially adapt existing belief assessment method, replacing summation with maximization. But the rich structure and symmetries captured by relational models together with the properties of the maximization operator offer an opportunity for additional simplification with potentially significant computational ramifications. Specifically, these models often have groups of variables that define symmetric distributions over some population of formulas. The maximizing choice for different elements of this group is the same. If we can realize this ahead of time, we can significantly reduce the size of the model by eliminating a potentially significant portion of random variables. This paper defines the notion of uniformly assigned and partially uniformly assigned sets of variables, shows how one can recognize these sets efficiently, and how the model can be greatly simplified once we recognize them, with little computational effort. 
We demonstrate the effectiveness of these ideas empirically on a number of models. ",Exploiting Uniform Assignments in First-Order MPE " This paper provides some new guidance in the construction of region graphs for Generalized Belief Propagation (GBP). We connect the problem of choosing the outer regions of a LoopStructured Region Graph (SRG) to that of finding a fundamental cycle basis of the corresponding Markov network. We also define a new class of tree-robust Loop-SRG for which GBP on any induced (spanning) tree of the Markov network, obtained by setting to zero the off-tree interactions, is exact. This class of SRG is then mapped to an equivalent class of tree-robust cycle bases on the Markov network. We show that a treerobust cycle basis can be identified by proving that for every subset of cycles, the graph obtained from the edges that participate in a single cycle only, is multiply connected. Using this we identify two classes of tree-robust cycle bases: planar cycle bases and ""star"" cycle bases. In experiments we show that tree-robustness can be successfully exploited as a design principle to improve the accuracy and convergence of GBP. ",Generalized Belief Propagation on Tree Robust Structured Region Graphs " We consider the problem of sampling from solutions defined by a set of hard constraints on a combinatorial space. We propose a new sampling technique that, while enforcing a uniform exploration of the search space, leverages the reasoning power of a systematic constraint solver in a black-box scheme. We present a series of challenging domains, such as energy barriers and highly asymmetric spaces, that reveal the difficulties introduced by hard constraints. We demonstrate that standard approaches such as Simulated Annealing and Gibbs Sampling are greatly affected, while our new technique can overcome many of these difficulties. Finally, we show that our sampling scheme naturally defines a new approximate model counting technique, which we empirically show to be very accurate on a range of benchmark problems. ",Uniform Solution Sampling Using a Constraint Solver As an Oracle " Stochastic Shortest Path (SSP) MDPs is a problem class widely studied in AI, especially in probabilistic planning. They describe a wide range of scenarios but make the restrictive assumption that the goal is reachable from any state, i.e., that dead-end states do not exist. Because of this, SSPs are unable to model various scenarios that may have catastrophic events (e.g., an airplane possibly crashing if it flies into a storm). Even though MDP algorithms have been used for solving problems with dead ends, a principled theory of SSP extensions that would allow dead ends, including theoretically sound algorithms for solving such MDPs, has been lacking. In this paper, we propose three new MDP classes that admit dead ends under increasingly weaker assumptions. We present Value Iteration-based as well as the more efficient heuristic search algorithms for optimally solving each class, and explore theoretical relationships between these classes. We also conduct a preliminary empirical study comparing the performance of our algorithms on different MDP classes, especially on scenarios with unavoidable dead ends. ",A Theory of Goal-Oriented MDPs with Dead Ends " We develop several algorithms taking advantage of two common approaches for bounding MPE queries in graphical models: minibucket elimination and message-passing updates for linear programming relaxations. 
Both methods are quite similar, and offer useful perspectives for the other; our hybrid approaches attempt to balance the advantages of each. We demonstrate the power of our hybrid algorithms through extensive empirical evaluation. Most notably, a Branch and Bound search guided by the heuristic function calculated by one of our new algorithms has recently won first place in the PASCAL2 inference challenge. ",Join-graph based cost-shifting schemes " We consider the problem of selecting a subset of alternatives given noisy evaluations of the relative strength of different alternatives. We wish to select a k-subset (for a given k) that provides a maximum likelihood estimate for one of several objectives, e.g., containing the strongest alternative. Although this problem is NP-hard, we show that when the noise level is sufficiently high, intuitive methods provide the optimal solution. We thus generalize classical results about singling out one alternative and identifying the hidden ranking of alternatives by strength. Extensive experiments show that our methods perform well in practical settings. ",A Maximum Likelihood Approach For Selecting Sets of Alternatives " We study the problem of complexity estimation in the context of parallelizing an advanced Branch and Bound-type algorithm over graphical models. The algorithm's pruning power makes load balancing, one crucial element of every distributed system, very challenging. We propose using a statistical regression model to identify and tackle disproportionally complex parallel subproblems, the cause of load imbalance, ahead of time. The proposed model is evaluated and analyzed on various levels and shown to yield robust predictions. We then demonstrate its effectiveness for load balancing in practice. ","A Case Study in Complexity Estimation: Towards Parallel Branch-and-Bound over Graphical Models" " Influence diagrams allow for intuitive and yet precise description of complex situations involving decision making under uncertainty. Unfortunately, most of the problems described by influence diagrams are hard to solve. In this paper we discuss the complexity of approximately solving influence diagrams. We do not assume no-forgetting or regularity, which makes the class of problems we address very broad. Remarkably, we show that when both the tree-width and the cardinality of the variables are bounded the problem admits a fully polynomial-time approximation scheme. ",The Complexity of Approximately Solving Influence Diagrams " Variational inference algorithms such as belief propagation have had tremendous impact on our ability to learn and use graphical models, and give many insights for developing or understanding exact and approximate inference. However, variational approaches have not been widely adoped for decision making in graphical models, often formulated through influence diagrams and including both centralized and decentralized (or multi-agent) decisions. In this work, we present a general variational framework for solving structured cooperative decision-making problems, use it to propose several belief propagation-like algorithms, and analyze them both theoretically and empirically. ",Belief Propagation for Structured Decision Making " We describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in Rp, and are typically only partially ordered. These can still be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. 
If the Pareto ordering is used this set can often be prohibitively large. We consider approximate representations of the Pareto set based on e-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user tradeoffs, which also greatly improves the efficiency. ",Multi-objective Influence Diagrams " Planning in partially observable Markov decision processes (POMDPs) remains a challenging topic in the artificial intelligence community, in spite of recent impressive progress in approximation techniques. Previous research has indicated that online planning approaches are promising in handling large-scale POMDP domains efficiently as they make decisions ""on demand"" instead of proactively for the entire state space. We present a Factored Hybrid Heuristic Online Planning (FHHOP) algorithm for large POMDPs. FHHOP gets its power by combining a novel hybrid heuristic search strategy with a recently developed factored state representation. On several benchmark problems, FHHOP substantially outperformed state-of-the-art online heuristic search approaches in terms of both scalability and quality. ","FHHOP: A Factored Hybrid Heuristic Online Planning Algorithm for Large POMDPs" " We introduce a new cluster-cumulant expansion (CCE) based on the fixed points of iterative belief propagation (IBP). This expansion is similar in spirit to the loop-series (LS) recently introduced in [1]. However, in contrast to the latter, the CCE enjoys the following important qualities: 1) it is defined for arbitrary state spaces 2) it is easily extended to fixed points of generalized belief propagation (GBP), 3) disconnected groups of variables will not contribute to the CCE and 4) the accuracy of the expansion empirically improves upon that of the LS. The CCE is based on the same M\""obius transform as the Kikuchi approximation, but unlike GBP does not require storing the beliefs of the GBP-clusters nor does it suffer from convergence issues during belief updating. ",A Cluster-Cumulant Expansion at the Fixed Points of Belief Propagation " Rough set theory is a useful tool to deal with uncertain, granular and incomplete knowledge in information systems. And it is based on equivalence relations or partitions. Matroid theory is a structure that generalizes linear independence in vector spaces, and has a variety of applications in many fields. In this paper, we propose a new type of matroids, namely, partition-circuit matroids, which are induced by partitions. Firstly, a partition satisfies circuit axioms in matroid theory, then it can induce a matroid which is called a partition-circuit matroid. A partition and an equivalence relation on the same universe are one-to-one corresponding, then some characteristics of partition-circuit matroids are studied through rough sets. Secondly, similar to the upper approximation number which is proposed by Wang and Zhu, we define the lower approximation number. Some characteristics of partition-circuit matroids and the dual matroids of them are investigated through the lower approximation number and the upper approximation number. ",Characteristic of partition-circuit matroid through approximation number " In this work we investigate the systems that implements algorithms for the planning problem in Artificial Intelligence, called planners, with especial attention to the planners based on the plan graph. 
We analyze the problem of comparing the performance of the different algorithms and we propose an environment for the development and analysis of planners. ",Ambiente de Planejamento Ip\^e " For the exploration of large state spaces, symbolic search using binary decision diagrams (BDDs) can save huge amounts of memory and computation time. State sets are represented and modified by accessing and manipulating their characteristic functions. BDD partitioning is used to compute the image as the disjunction of smaller subimages. In this paper, we propose a novel BDD partitioning option. The partitioning is lexicographical in the binary representation of the states contained in the set that is represented by a BDD and uniform with respect to the number of states represented. The motivation of controlling the state set sizes in the partitioning is to eventually bridge the gap between explicit and symbolic search. Let n be the size of the binary state vector. We propose an O(n) ranking and unranking scheme that supports negated edges and operates on top of precomputed satcount values. For the uniform split of a BDD, we then use unranking to provide paths along which we partition the BDDs. In a shared BDD representation the efforts are O(n). The algorithms are fully integrated in the CUDD library and evaluated in strongly solving general game playing benchmarks. ",Lex-Partitioning: A New Option for BDD Search " In this paper we present the results of unstructured data clustering, in this case textual data from the Reuters 21578 corpus, with a new biomimetic approach using the immune system. Before experimenting with our immune system, we digitized the textual data using the n-grams approach. The novelty lies in the hybridization of n-grams and immune systems for clustering. The experimental results show that the recommended ideas are promising and prove that this method can solve the text clustering problem. ","A Biomimetic Approach Based on Immune Systems for Classification of Unstructured Data" " With the increased use of ontologies in semantically-enabled applications, the issue of debugging defects in ontologies has become increasingly important. These defects can lead to wrong or incomplete results for the applications. Debugging consists of the phases of detection and repairing. In this paper we focus on the repairing phase of a particular kind of defect, i.e. missing relations in the is-a hierarchy. Previous work has dealt with the case of taxonomies. In this work we extend the scope to deal with ALC ontologies that can be represented using acyclic terminologies. We present algorithms and discuss a system. ","Get my pizza right: Repairing missing is-a relations in ALC ontologies (extended version)" " The Algorithm Selection Problem is concerned with selecting the best algorithm to solve a given problem on a case-by-case basis. It has become especially relevant in the last decade, as researchers are increasingly investigating how to identify the most suitable existing algorithm for solving a problem instead of developing new algorithms. This survey presents an overview of this work focusing on the contributions made in the area of combinatorial search problems, where Algorithm Selection techniques have achieved significant performance improvements. We unify and organise the vast literature according to criteria that determine Algorithm Selection systems in practice.
The comprehensive classification of approaches identifies and analyses the different directions from which Algorithm Selection has been approached. This paper contrasts and compares different methods for solving the problem as well as ways of using these solutions. It closes by identifying directions of current and future research. ",Algorithm Selection for Combinatorial Search Problems: A Survey " Rough sets were proposed to deal with the vagueness and incompleteness of knowledge in information systems. There are many optimization issues in this field, such as attribute reduction. Matroids, generalized from matrices, are widely used in optimization. Therefore, it is necessary to connect matroids with rough sets. In this paper, we take fields into consideration and introduce matrices to study rough sets through vector matroids. First, a matrix representation of an equivalence relation is proposed, and then a matroidal structure of rough sets over a field is presented by the matrix. Second, the properties of the matroidal structure, including circuits, bases and so on, are studied through two special matrix solution spaces, especially the null space. Third, over a binary field, we construct an equivalence relation from the matrix null space, and establish an algebra isomorphism from the collection of equivalence relations to the collection of sets in which any member is a family of the minimal non-empty sets that are supports of members of the null space of a binary dependence matrix. In a word, matrices provide a new viewpoint to study rough sets. ",Matrix approach to rough sets through vector matroids over a field " There are few knowledge representation (KR) techniques available for efficiently representing knowledge. However, with the increase in complexity, better methods are needed. Some researchers came up with hybrid mechanisms by combining two or more methods. In an effort to construct an intelligent computer system, a primary consideration is to represent large amounts of knowledge in a way that allows effective use and efficiently organizing information to facilitate making the recommended inferences. There are merits and demerits of combinations, and a standardized method of KR is needed. In this paper, various hybrid schemes of KR are explored at length and details are presented. ",Hybrid Systems for Knowledge Representation in Artificial Intelligence " We describe an inference task in which a set of timestamped event observations must be clustered into an unknown number of temporal sequences with independent and varying rates of observations. Various existing approaches to multi-object tracking assume a fixed number of sources and/or a fixed observation rate; we develop an approach to inferring structure in timestamped data produced by a mixture of an unknown and varying number of similar Markov renewal processes, plus independent clutter noise. The inference simultaneously distinguishes signal from noise as well as clustering signal observations into separate source streams. We illustrate the technique via a synthetic experiment as well as an experiment to track a mixture of singing birds. ",Segregating event streams and noise with a Markov renewal process model " Decision tree is an effective classification approach in data mining and machine learning. In applications, test costs and misclassification costs should be considered while inducing decision trees. Recently, some cost-sensitive learning algorithms based on ID3 such as CS-ID3, IDX, \lambda-ID3 have been proposed to deal with the issue.
These algorithms deal only with symbolic data. In this paper, we develop a decision tree algorithm inspired by C4.5 for numeric data. There are two major issues for our algorithm. First, we develop the test cost weighted information gain ratio as the heuristic information. According to this heuristic information, our algorithm picks, at each selection, the attribute that provides a higher gain ratio at a lower cost. Second, we design a post-pruning strategy by considering the tradeoff between test costs and misclassification costs of the generated decision tree. In this way, the total cost is reduced. Experimental results indicate that (1) our algorithm is stable and effective; (2) the post-pruning technique reduces the total cost significantly; (3) the competition strategy is effective for obtaining a cost-sensitive decision tree with low cost. ",Cost-sensitive C4.5 with post-pruning and competition " Case Based Reasoning (CBR) is an intelligent way of thinking based on experience and the capitalization of already solved cases (source cases) to find a solution to a new problem (target case). The retrieval phase consists of identifying source cases that are similar to the target case. This phase may lead to erroneous results if the existing knowledge imperfections are not taken into account. This work presents a novel solution based on fuzzy logic techniques and adaptation measures which aggregate weighted similarities to improve the retrieval results. To confirm the efficiency of our solution, we have applied it to the industrial diagnosis domain. The obtained results are more efficient than those obtained by applying typical measures. ",A Logic and Adaptive Approach for Efficient Diagnosis Systems using CBR " This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing most of the games' state (not only players' orders). We explain one of the possible usages of this dataset by clustering armies based on their compositions. This reduction of army compositions to Gaussian mixtures allows for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on the mixture components of army compositions. ",A Dataset for StarCraft AI \& an Example of Armies Clustering " Many cognitive systems deploy multiple, closed, individually consistent models which can represent interpretations of the present state of the world, moments in the past, possible futures or alternate versions of reality. While they appear under different names, these structures can be grouped under the general term of worlds. The Xapagy architecture is a story-oriented cognitive system which relies exclusively on the autobiographical memory implemented as a raw collection of events organized into world-type structures called {\em scenes}. The system performs reasoning by shadowing current events with events from the autobiography. The shadows are then extrapolated into headless shadows corresponding to predictions, hidden events or inferred relations. ","Shadows and headless shadows: a worlds-based, autobiographical approach to reasoning" " This paper argues that the problem of identity is a critical challenge in agents which are able to reason about stories.
The Xapagy architecture has been built from scratch to perform narrative reasoning and relies on a somewhat unusual approach to represent instances and identity. We illustrate the approach by a representation of the story of Little Red Riding Hood in the architecture, with a focus on the problem of identity raised by the narrative. ",Modeling problems of identity in Little Red Riding Hood " The Xapagy architecture is a story-oriented cognitive system which relies exclusively on the autobiographical memory implemented as a raw collection of events. Reasoning is performed by shadowing current events with events from the autobiography. The shadows are then extrapolated into headless shadows (HLSs). In a story-following mood, HLSs can be used to track the level of surprise of the agent, to infer hidden actions or relations between the participants, and to summarize ongoing events. In recall mood, the HLSs can be used to create new stories ranging from exact recall to free-form confabulation. ","Shadows and Headless Shadows: an Autobiographical Approach to Narrative Reasoning" " Computational thinking is a new problem solving method named for its extensive use of computer science techniques. It synthesizes critical thinking and existing knowledge and applies them in solving complex technological problems. The term was coined by J. Wing, but the relationship between computational and critical thinking, the two modes of thinking in solving problems, has not yet been clearly established. This paper aims at shedding some light on this relationship. We also present two classroom experiments performed recently at the Graduate Technological Educational Institute of Patras in Greece. The results of these experiments give a strong indication that the use of computers as a tool for problem solving enhances the students' abilities in solving real world problems involving mathematical modelling. This is also corroborated by earlier findings of other researchers on the problem solving process in general (not only for mathematical problems). ",Problem Solving and Computational Thinking in a Learning Environment " Traffic regulation must be respected by all vehicles, either human- or computer-driven. However, extreme traffic situations might exhibit practical cases in which a vehicle should safely and reasonably relax traffic regulation, e.g., in order not to be indefinitely blocked and to keep circulating. In this paper, we propose a high-level representation of an automated vehicle, other vehicles and their environment, which can assist drivers in taking such ""illegal"" but practical relaxation decisions. This high-level representation (an ontology) includes topological knowledge and inference rules, in order to compute the next high-level motion an automated vehicle should take, as assistance to a driver. Results on practical cases are presented. ","An ontology-based approach to relax traffic regulation for autonomous vehicle assistance" " Soft Constraint Logic Programming is a natural and flexible declarative programming formalism, which allows one to model and solve real-life problems involving constraints of different types. In this paper, after providing a slightly more general and elegant presentation of the framework, we show how we can apply it to the e-mobility problem of coordinating electric vehicles in order to overcome both energetic and temporal constraints and so to reduce their running cost.
In particular, we focus on the journey optimization sub-problem, considering sequences of trips from one of a user's appointments to another. Solutions provide the best alternatives in terms of time and energy consumption, including route sequences and possible charging events. ","Soft Constraint Logic Programming for Electric Vehicle Travel Optimization" " We look at the problem of revising fuzzy belief bases, i.e., belief base revision in which both formulas in the base as well as revision-input formulas can come attached with varying truth-degrees. Working within a very general framework for fuzzy logic which is able to capture a variety of types of inference under uncertainty, such as truth-functional fuzzy logics and certain types of probabilistic inference, we show how the idea of rational change from 'crisp' base revision, as embodied by the idea of partial meet revision, can be faithfully extended to revising fuzzy belief bases. We present and axiomatise an operation of partial meet fuzzy revision and illustrate how the operation works in several important special instances of the framework. ",On revising fuzzy belief bases " A qualitative probabilistic network models the probabilistic relationships between its variables by means of signs. Non-monotonic influences have an ambiguous sign associated with them. These ambiguous signs typically lead to uninformative results upon inference. A non-monotonic influence can, however, be associated with a more informative sign that indicates its effect in the current state of the network. To capture this effect, we introduce the concept of a situational sign. Furthermore, if the network converts to a state in which all variables that provoke the non-monotonicity have been observed, a non-monotonic influence reduces to a monotonic influence. We study the persistence and propagation of situational signs upon inference and give a method to establish the sign of a reduced influence. ",Upgrading Ambiguous Signs in QPNs " Dependability modeling and evaluation is aimed at investigating whether a system performs its function correctly over time. A usual way to achieve high reliability is to design redundant systems that contain several replicas of the same subsystem or component. State space methods for dependability analysis may suffer from the state space explosion problem in such situations. Combinatorial models, on the other hand, require the simplifying assumption of statistical independence; however, in the case of redundant systems, this does not guarantee a reduced number of modeled elements. In order to provide a more compact system representation, parametric system modeling has been investigated in the literature, in such a way that a set of replicas of a given subsystem is parameterized so that only one representative instance is explicitly included. While modeling aspects can be suitably addressed by these approaches, analytical tools working on parametric characterizations are often more difficult to define, and the standard approach is to 'unfold' the parametric model in order to exploit standard analysis algorithms working at the unfolded 'ground' level. Moreover, parameterized combinatorial methods still require the statistical independence assumption. In the present paper we consider the formalism of Parametric Fault Trees (PFT) and we show how it can be related to Probabilistic Horn Abduction (PHA).
Since PHA is a framework where both modeling and analysis can be performed in a restricted first-order language, we aim at showing that converting a PFT into a PHA knowledge base will allow an approach to dependability analysis that directly exploits the parametric representation. We will show that classical qualitative and quantitative dependability measures can be characterized within PHA. Furthermore, additional modeling aspects (such as noisy gates and local dependencies) as well as additional reliability measures (such as posterior probability analysis) can be naturally addressed by this conversion. A simple example of a multi-processor system with several replicated units is used to illustrate the approach. ",Parametric Dependability Analysis through Probabilistic Horn Abduction " This paper introduces a new methodology for triangulating dynamic Bayesian networks (DBNs) and dynamic graphical models (DGMs). While most methods to triangulate such networks use some form of constrained elimination scheme based on properties of the underlying directed graph, we find it useful to view triangulation and elimination using properties only of the resulting undirected graph, obtained after the moralization step. We first briefly introduce the Graphical Model Toolkit (GMTK) and its notion of dynamic graphical models, one that slightly extends the standard notion of a DBN. We next introduce the 'boundary algorithm', a method to find the best boundary between partitions in a dynamic model. We find that using this algorithm, the notions of forward- and backward-interface become moot - namely, the size and fill-in of the best forward- and backward-interface are identical. Moreover, we observe that finding a good partition boundary allows for constrained elimination orders (and therefore graph triangulations) that are not possible using standard slice-by-slice constrained eliminations. More interestingly, with certain boundaries it is possible to obtain constrained elimination schemes that lie outside the space of possible triangulations using only unconstrained elimination. Lastly, we report triangulation results on invented graphs, standard DBNs from the literature, novel DBNs used in speech recognition research systems, and also random graphs. Using a number of different triangulation quality measures (max clique size, state-space, etc.), we find that with our boundary algorithm the triangulation quality can dramatically improve. ",On Triangulating Dynamic Graphical Models " The paper studies empirically the time-space trade-off between sampling and inference in a cutset sampling algorithm. The algorithm samples over a subset of nodes in a Bayesian network and applies exact inference over the rest. Consequently, while the size of the sampling space decreases, requiring fewer samples for convergence, the time for generating each single sample increases. The w-cutset sampling selects a sampling set such that the induced width of the network when the sampling set is observed is bounded by w, thus requiring inference whose complexity is exponential in w. In this paper, we investigate the performance of w-cutset sampling over a range of w values and measure the accuracy of w-cutset sampling as a function of w. Our experiments demonstrate that the cutset sampling idea is quite powerful, showing that an optimal balance between inference and sampling benefits substantially from restricting the cutset size, even at the cost of more complex inference.
",An Empirical Study of w-Cutset Sampling for Bayesian Networks " In a standard possibilistic logic, prioritized information are encoded by means of weighted knowledge base. This paper proposes an extension of possibilistic logic for dealing with partially ordered information. We Show that all basic notions of standard possibilitic logic (sumbsumption, syntactic and semantic inference, etc.) have natural couterparts when dealing with partially ordered information. We also propose an algorithm which computes possibilistic conclusions of a partial knowledge base of a partially ordered knowlege base. ",A possibilistic handling of partially ordered information " Backtracking search is a powerful algorithmic paradigm that can be used to solve many problems. It is in a certain sense the dual of variable elimination; but on many problems, e.g., SAT, it is vastly superior to variable elimination in practice. Motivated by this we investigate the application of backtracking search to the problem of Bayesian inference (Bayes). We show that natural generalizations of known techniques allow backtracking search to achieve performance guarantees similar to standard algorithms for Bayes, and that there exist problems on which backtracking can in fact do much better. We also demonstrate that these ideas can be applied to implement a Bayesian inference engine whose performance is competitive with standard algorithms. Since backtracking search can very naturally take advantage of context specific structure, the potential exists for performance superior to standard algorithms on many problems. ",Value Elimination: Bayesian Inference via Backtracking Search " Recursive Conditioning (RC) was introduced recently as the first any-space algorithm for inference in Bayesian networks which can trade time for space by varying the size of its cache at the increment needed to store a floating point number. Under full caching, RC has an asymptotic time and space complexity which is comparable to mainstream algorithms based on variable elimination and clustering (exponential in the network treewidth and linear in its size). We show two main results about RC in this paper. First, we show that its actual space requirements under full caching are much more modest than those needed by mainstream methods and study the implications of this finding. Second, we show that RC can effectively deal with determinism in Bayesian networks by employing standard logical techniques, such as unit resolution, allowing a significant reduction in its time requirements in certain cases. We illustrate our results using a number of benchmark networks, including the very challenging ones that arise in genetic linkage analysis. ",New Advances in Inference by Recursive Conditioning " Most methods of exact probability propagation in Bayesian networks do not carry out the inference directly over the network, but over a secondary structure known as a junction tree or a join tree (JT). The process of obtaining a JT is usually termed {sl compilation}. As compilation is usually viewed as a whole process; each time the network is modified, a new compilation process has to be carried out. The possibility of reusing an already existing JT, in order to obtain the new one regarding only the modifications in the network has received only little attention in the literature. In this paper we present a method for incremental compilation of a Bayesian network, following the classical scheme in which triangulation plays the key role. 
In order to perform incremental compilation we propose to recompile only those parts of the JT which may have been affected by the network's modifications. To do so, we exploit the technique of maximal prime subgraph decomposition to determine the minimal subgraph(s) that have to be recompiled, and thereby the minimal subtree(s) of the JT that should be replaced by new subtree(s). We focus on structural modifications: addition and deletion of links and variables. ",Incremental Compilation of Bayesian networks " This paper is directed towards combining Pearl's structural-model approach to causal reasoning with high-level formalisms for reasoning about actions. More precisely, we present a combination of Pearl's structural-model approach with Poole's independent choice logic. We show how probabilistic theories in the independent choice logic can be mapped to probabilistic causal models. This mapping provides the independent choice logic with appealing concepts of causality and explanation from the structural-model approach. We illustrate this along the lines of Halpern and Pearl's sophisticated notions of actual cause, explanation, and partial explanation. This mapping also adds first-order modeling capabilities and explicit actions to the structural-model approach. ",Structure-Based Causes and Explanations in the Independent Choice Logic " Symbolic representations have been used successfully in off-line planning algorithms for Markov decision processes. We show that they can also improve the performance of on-line planners. In addition to reducing computation time, symbolic generalization can reduce the amount of costly real-world interactions required for convergence. We introduce Symbolic Real-Time Dynamic Programming (or sRTDP), an extension of RTDP. After each step of on-line interaction with an environment, sRTDP uses symbolic model-checking techniques to generalize its experience by updating a group of states rather than a single state. We examine two heuristic approaches to dynamic grouping of states and show that they accelerate the planning process significantly in terms of both CPU time and the number of steps of interaction with the environment. ",Symbolic Generalization for On-line Planning " We present the language PC+ for probabilistic reasoning about actions, which is a generalization of the action language C+ that allows us to deal with probabilistic as well as nondeterministic effects of actions. We define a formal semantics of PC+ in terms of probabilistic transitions between sets of states. Using a concept of a history and its belief state, we then show how several important problems in reasoning about actions can be concisely formulated in our formalism. ",Probabilistic Reasoning about Actions in Nonmonotonic Causal Theories " In non-ergodic belief networks the posterior belief of many queries given evidence may become zero. The paper shows that when belief propagation is applied iteratively over arbitrary networks (the so-called iterative or loopy belief propagation (IBP)) it is identical to an arc-consistency algorithm relative to zero-belief queries (namely, assessing zero posterior probabilities).
This implies that zero-belief conclusions derived by belief propagation converge and are sound. More importantly, it suggests that the inference power of IBP is as strong and as weak as that of arc-consistency. This allows the synthesis of belief networks for which belief propagation is useless on the one hand, and, on the other, focuses the investigation of classes of belief networks for which belief propagation may be zero-complete. Finally, all the above conclusions also apply to generalized belief propagation algorithms that extend loopy belief propagation and allow a crisper understanding of their power. ",A Simple Insight into Iterative Belief Propagation's Success " Pearl's concept of a d-connecting path is one of the foundations of the modern theory of graphical models: the absence of a d-connecting path in a DAG indicates that conditional independence will hold in any distribution factorising according to that graph. In this paper we show that in singly-connected Gaussian DAGs it is possible to use the form of a d-connection to obtain qualitative information about the strength of conditional dependence. More precisely, the squared partial correlations between two given variables, conditioned on different subsets, may be partially ordered by examining the relationship between the d-connecting path and the set of variables conditioned upon. ","Using the structure of d-connecting paths as a qualitative measure of the strength of dependence" " In this paper, we introduce a method for approximating the solution to inference and optimization tasks in uncertain and deterministic reasoning. Such tasks are in general intractable for exact algorithms because of the large number of dependency relationships in their structure. Our method effectively maps such a dense problem to a sparser one which is in some sense ""closest"". Exact methods can be run on the sparser problem to derive bounds on the original answer, which can be quite sharp. We present empirical results demonstrating that our method works well on the tasks of belief inference and finding the probability of the most probable explanation in belief networks, and finding the cost of the solution that violates the smallest number of constraints in constraint satisfaction problems. On one large CPCS network, for example, we were able to calculate upper and lower bounds on the conditional probability of a variable, given evidence, that were almost identical in the average case. ","Approximate Decomposition: A Method for Bounding and Estimating Probabilistic and Deterministic Queries" " Real-world distributed systems and networks are often unreliable and subject to random failures of their components. Such stochastic behavior adversely affects the complexity of optimization tasks performed routinely upon such systems, in particular, various resource allocation tasks. In this work we investigate and develop Monte Carlo solutions for a class of two-stage optimization problems in stochastic networks in which the expected value of resource allocations before and after stochastic failures needs to be optimized. The limitation of these problems is that their exact solutions are exponential in the number of unreliable network components: thus, exact methods do not scale up well to large networks often seen in practice. We first prove that Monte Carlo optimization methods can overcome the exponential bottleneck of exact methods.
Next we support our theoretical findings with resource allocation experiments and show a very good scale-up potential of the new methods to large stochastic networks. ","Monte-Carlo optimizations for resource allocation problems in stochastic network systems" " This paper examines a number of solution methods for decision processes with non-Markovian rewards (NMRDPs). They all exploit a temporal logic specification of the reward function to automatically translate the NMRDP into an equivalent Markov decision process (MDP) amenable to well-known MDP solution methods. They differ however in the representation of the target MDP and the class of MDP solution methods to which they are suited. As a result, they adopt different temporal logics and different translations. Unfortunately, no implementation of these methods, nor experimental, let alone comparative, results have ever been reported. This paper is the first step towards filling this gap. We describe an integrated system for solving NMRDPs which implements these methods and several variants under a common interface; we use it to compare the various approaches and identify the problem features favoring one over the other. ","Implementation and Comparison of Solution Methods for Decision Processes with Non-Markovian Rewards" " This paper studies decision making for Walley's partially consonant belief functions (pcb). In a pcb, the set of foci is partitioned. Within each partition, the foci are nested. The pcb class includes probability functions and possibility functions as extreme cases. Unlike earlier proposals for a decision theory with belief functions, we employ an axiomatic approach. We adopt an axiom system similar in spirit to von Neumann-Morgenstern's linear utility theory for a preference relation on pcb lotteries. We prove a representation theorem for this relation. Utility for a pcb lottery is a combination of linear utility for the probabilistic lottery and binary utility for the possibilistic lottery. ",Decision Making with Partially Consonant Belief Functions " The two most popular types of graphical model are directed models (Bayesian networks) and undirected models (Markov random fields, or MRFs). Directed and undirected models offer complementary properties in model construction, expressing conditional independencies, expressing arbitrary factorizations of joint distributions, and formulating message-passing inference algorithms. We show that the strengths of these two representations can be combined in a single type of graphical model called a 'factor graph'. Every Bayesian network or MRF can be easily converted to a factor graph that expresses the same conditional independencies, expresses the same factorization of the joint distribution, and can be used for probabilistic inference through application of a single, simple message-passing algorithm. In contrast to chain graphs, where message-passing is implemented on a hypergraph, message-passing can be directly implemented on the factor graph. We describe a modified 'Bayes-ball' algorithm for establishing conditional independence in factor graphs, and we show that factor graphs form a strict superset of Bayesian networks and MRFs. In particular, we give an example of a commonly-used 'mixture of experts' model fragment, whose independencies cannot be represented in a Bayesian network or an MRF, but can be represented in a factor graph. 
We finish by giving examples of real-world problems that are not well suited to representation in Bayesian networks and MRFs, but are well suited to representation in factor graphs. ","Extending Factor Graphs so as to Unify Directed and Undirected Graphical Models" " This paper is devoted to the search for robust solutions in state space graphs when costs depend on scenarios. We first present axiomatic requirements for preference compatibility with the intuitive idea of robustness. This leads us to propose the Lorenz dominance rule as a basis for robustness analysis. Then, after presenting complexity results about the determination of robust solutions, we propose a new sophistication of A* specially designed to determine the set of robust paths in a state space graph. The behavior of the algorithm is illustrated on a small example. Finally, an axiomatic justification of the refinement of robustness by an OWA criterion is provided. ","An Axiomatic Approach to Robustness in Search Problems with Multiple Scenarios" " MAP is the problem of finding a most probable instantiation of a set of variables in a Bayesian network given some evidence. Unlike computing posterior probabilities, or MPE (a special case of MAP), the time and space complexity of structural solutions for MAP are not only exponential in the network treewidth, but in a larger parameter known as the ""constrained"" treewidth. In practice, this means that computing MAP can be orders of magnitude more expensive than computing posterior probabilities or MPE. This paper introduces a new, simple upper bound on the probability of a MAP solution, which admits a tradeoff between the bound quality and the time needed to compute it. The bound is shown to be generally much tighter than those of other methods of comparable complexity. We use this proposed upper bound to develop a branch-and-bound search algorithm for solving MAP exactly. Experimental results demonstrate that the search algorithm is able to solve many problems that are far beyond the reach of any structure-based method for MAP. For example, we show that the proposed algorithm can compute MAP exactly and efficiently for some networks whose constrained treewidth is more than 40. ",Solving MAP Exactly using Systematic Search " The aim of this research is to develop a reasoning under uncertainty strategy in the context of the Fuzzy Inductive Reasoning (FIR) methodology. FIR emerged from the General Systems Problem Solving developed by G. Klir. It is a data-driven methodology based on systems behavior rather than on structural knowledge. It is a very useful tool for both the modeling and the prediction of those systems for which no previous structural knowledge is available. FIR reasoning is based on pattern rules synthesized from the available data. The size of the pattern rule base can be very large, making the prediction process quite difficult. In order to reduce the size of the pattern rule base, it is possible to automatically extract classical Sugeno fuzzy rules starting from the set of pattern rules. The Sugeno rule base preserves pattern rule knowledge as much as possible. In this process some information is lost but robustness is considerably increased. In the forecasting process either the pattern rule base or the Sugeno fuzzy rule base can be used. 
The first option is desirable when the computational resources make it possible to deal with the overall pattern rule base or when the extracted fuzzy rules are not accurate enough due to uncertainty associated with the original data. In the second option, the prediction process is done by means of the classical Sugeno inference system. If the amount of uncertainty associated with the data is small, the predictions obtained using the Sugeno fuzzy rule base will be very accurate. In this paper a mixed pattern/fuzzy rules strategy is proposed to deal with uncertainty in such a way that the best of both perspectives is used. Areas in the data space with a higher level of uncertainty are identified by means of the so-called error models. The prediction process in these areas makes use of a mixed pattern/fuzzy rules scheme, whereas areas identified with a lower level of uncertainty only use the Sugeno fuzzy rule base. The proposed strategy is applied to a real biomedical system, i.e., the central nervous system control of the cardiovascular system. ",Dealing with uncertainty in fuzzy inductive reasoning methodology " For a given problem, the optimal Markov policy can be considered as a conditional or contingent plan containing a (potentially large) number of branches. Unfortunately, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. For example, it may be that plans must later undergo more detailed simulation to verify correctness and safety, or that they must be simple enough to be understood and analyzed by humans. As a result, it may be necessary to limit consideration to plans with only a small number of branches. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning (OKP). It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a Partially Observable Markov Decision Process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases. ",Optimal Limited Contingency Planning " The paper continues the study of partitioning-based inference of heuristics for search in the context of solving the Most Probable Explanation task in Bayesian Networks. We compare two systematic Branch and Bound search algorithms, BBBT (for which the heuristic information is constructed during search and allows dynamic variable/value ordering) and its predecessor BBMB (for which the heuristic information is pre-compiled), against a number of popular local search algorithms for the MPE problem. We show empirically that, when viewed as approximation schemes, BBBT/BBMB are superior to all of these best-known SLS algorithms, especially when the domain sizes increase beyond 2. This is in contrast with the performance of SLS vs. systematic search on CSP/SAT problems, where SLS often significantly outperforms systematic algorithms. As far as we know, BBBT/BBMB are currently the best-performing algorithms for solving the MPE task. ",Systematic vs. Non-systematic Algorithms for Solving the MPE Task " Precision achieved by stochastic sampling algorithms for Bayesian networks typically deteriorates in the face of extremely unlikely evidence. 
To address this problem, we propose the Evidence Pre-propagation Importance Sampling algorithm (EPIS-BN), an importance sampling algorithm that computes an approximate importance function using two heuristic methods: loopy belief propagation and epsilon-cutoff. We tested the performance of epsilon-cutoff on three large real Bayesian networks: ANDES, CPCS, and PATHFINDER. We observed that on each of these networks the EPIS-BN algorithm gives us a considerable improvement over the current state-of-the-art algorithm, the AIS-BN algorithm. In addition, it avoids the costly learning stage of the AIS-BN algorithm. ",An Importance Sampling Algorithm Based on Evidence Pre-propagation " In this paper we examine the problem of inference in Bayesian Networks with discrete random variables that have very large or even unbounded domains. For example, in a domain where we are trying to identify a person, we may have variables that have as domains the set of all names, the set of all postal codes, or the set of all credit card numbers. We cannot just have big tables of the conditional probabilities, but need compact representations. We provide an inference algorithm, based on variable elimination, for belief networks containing both large domain and normal discrete random variables. We use intensional (i.e., in terms of procedures) and extensional (in terms of listing the elements) representations of conditional probabilities and of the intermediate factors. ",Efficient Inference in Large Discrete Domains " We present CLP(BN), a novel approach that aims at expressing Bayesian networks through the constraint logic programming framework. Arguably, an important limitation of traditional Bayesian networks is that they are propositional, and thus cannot represent relations between multiple similar objects in multiple contexts. Several researchers have thus proposed first-order languages to describe such networks. Namely, one very successful example of this approach is the Probabilistic Relational Models (PRMs), which combine Bayesian networks with relational database technology. The key difficulty that we had to address when designing CLP(BN) is that logic-based representations use ground terms to denote objects. With probabilistic data, we need to be able to uniquely represent an object whose value we are not sure about. We use Skolem functions as unique new symbols that uniquely represent objects with unknown value. The semantics of CLP(BN) programs then naturally follows from the general framework of constraint logic programming, as applied to a specific domain where we have probabilistic data. This paper introduces and defines CLP(BN), and it describes an implementation and initial experiments. The paper also shows how CLP(BN) relates to Probabilistic Relational Models (PRMs), Ngo and Haddawy's Probabilistic Logic Programs, and Kersting and De Raedt's Bayesian Logic Programs. ",CLP(BN): Constraint Logic Programming for Probabilistic Knowledge " We use principles of fuzzy logic to develop a general model representing several processes in a system's operation characterized by a degree of vagueness and/or uncertainty. Further, we introduce three alternative measures of a fuzzy system's effectiveness connected to the above model. An application is also developed for the Mathematical Modelling process illustrating our results. ",A Study on Fuzzy Systems " By their nature, configuration problems are combinatorial (optimization) problems. 
In order to find a configuration, a solver has to instantiate a number of components of some type, and each of these components can be used in a relation defined for that type. Therefore, many solutions of a configuration problem have symmetric counterparts which can be obtained by replacing some component of a solution by another one of the same type. These symmetric solutions decrease the performance of optimization algorithms for two reasons: a) they satisfy all requirements and cannot be pruned out from the search space; and b) the existence of symmetric optimal solutions does not allow the optimum to be proved in feasible time. ",Study: Symmetry breaking for ASP " Results are presented on the performance of the Adaptive Neuro-Fuzzy Inference System (ANFIS) for wind velocity forecasts in the Isthmus of Tehuantepec region in the state of Oaxaca, Mexico. The data bank was provided by the meteorological station located at the University of Isthmus, Tehuantepec campus, and this data bank covers the period from 2008 to 2011. Three data models were constructed to carry out 16-, 24- and 48-hour forecasts using the following variables: wind velocity, temperature, barometric pressure, and date. The performance measure for the three models is the mean standard error (MSE). In this work, performance analysis in short-term prediction is presented, because it is essential in order to define an adequate wind speed model for wind farms, where proper planning provides economic benefits. ",Performance Analysis of ANFIS in short term Wind Speed Prediction " ConArg is a Constraint Programming-based tool that can be used to model and solve different problems related to Abstract Argumentation Frameworks (AFs). To implement this tool we have used JaCoP, a Java library that provides the user with a Finite Domain Constraint Programming paradigm. ConArg is able to randomly generate networks with small-world properties in order to find conflict-free, admissible, complete, stable, grounded, preferred, semi-stable, stage and ideal extensions on such interaction graphs. We present the main features of ConArg and we report on its time performance, showing also a comparison with ASPARTIX [1], a similar tool using Answer Set Programming. The use of techniques for constraint solving can tackle the complexity of the problems presented in [2]. Moreover we suggest semiring-based soft constraints as a means to parametrically represent and solve Weighted Argumentation Frameworks: different kinds of preference levels related to attacks, e.g., a score representing a ""fuzziness"", a ""cost"" or a probability, can be represented by choosing different instantiations of the semiring algebraic structure. The basic idea is to provide a common computational and quantitative framework. ","ConArg: a Tool to Solve (Weighted) Abstract Argumentation Frameworks with (Soft) Constraints" " The Semantic Web ontology language OWL 2 DL comes with a variety of language features that enable sophisticated and practically useful modeling. However, the use of these features has been severely restricted in order to retain decidability of the language. For example, OWL 2 DL does not allow a property to be both transitive and asymmetric, which would be desirable, e.g., for representing an ancestor relation. In this paper, we argue that the so-called global restrictions of OWL 2 DL preclude many useful forms of modeling, by providing a catalog of basic modeling patterns that would be available in OWL 2 DL if the global restrictions were discarded. 
We then report on the results of evaluating several state-of-the-art OWL 2 DL reasoners on problems that use combinations of features in a way that violates the global restrictions. The systems turn out to rely heavily on the global restrictions and are thus largely incapable of coping with the modeling patterns. Next we show how off-the-shelf first-order logic theorem proving technology can be used to perform reasoning in the OWL 2 direct semantics, the semantics that underlies OWL 2 DL, but without requiring the global restrictions. Applying a naive proof-of-concept implementation of this approach to the test problems was successful in all cases. Based on our observations, we make suggestions for future lines of research on expressive description logic-style OWL reasoning. ",Modeling in OWL 2 without Restrictions " All standard AI planners to date can only handle a single objective, and the only way for them to take into account multiple objectives is by aggregation of the objectives. Furthermore, and in deep contrast with the single-objective case, there are no benchmark problems on which to test the algorithms for multi-objective planning. Divide and Evolve (DAE) is an evolutionary planner that won the (single-objective) deterministic temporal satisficing track in the last International Planning Competition. Even though it makes intensive use of the classical (and hence single-objective) planner YAHSP, it is possible to turn DAE-YAHSP into a multi-objective evolutionary planner. A tunable benchmark suite for multi-objective planning is first proposed, and the performances of several variants of multi-objective DAE-YAHSP are compared on different instances of this benchmark, hopefully paving the way for further multi-objective competitions in AI planning. ",Multi-Objective AI Planning: Evaluating DAE-YAHSP on a Tunable Benchmark " We investigate the concept of symmetry and its role in problem solving. This paper first defines precisely the elements that constitute a ""problem"" and its ""solution,"" and gives several examples to illustrate these definitions. Given precise definitions of problems, it is relatively straightforward to construct a search process for finding solutions. Finally, this paper attempts to exploit the concept of symmetry in improving problem solving. ",Improving problem solving by exploiting the concept of symmetry " When considering thinking, we find strong connections to encoding and to the Theory of Information. Thinking is a natural process and, at the same time, a complex object of investigation. Understanding how our mind works has always been a challenge, and we are trying to find universal models for it. Many approaches have been considered so far, but the search continues. The goal is to find a consistent, non-contradictory view, which should at the same time be flexible enough in all dimensions to represent various kinds of processes and environments, matters of different nature and diverse objects. Developing such a model is the aim of this article. ","Irrespective Priority-Based Regular Properties of High-Intensity Virtual Environments" " Neuroevolution has yet to scale up to complex reinforcement learning tasks that require large networks. Networks with many inputs (e.g. raw video) imply a very high-dimensional search space if encoded directly. Indirect methods use a more compact genotype representation that is transformed into networks of potentially arbitrary size. 
In this paper, we present an indirect method where networks are encoded by a set of Fourier coefficients which are transformed into network weight matrices via an inverse Fourier-type transform. Because there often exist network solutions whose weight matrices contain regularity (i.e. adjacent weights are correlated), the number of coefficients required to represent these networks in the frequency domain is much smaller than the number of weights (in the same way that natural images can be compressed by ignoring high-frequency components). This ""compressed"" encoding is compared to the direct approach, where search is conducted in the weight space, on the high-dimensional octopus arm task. The results show that representing networks in the frequency domain can reduce the search-space dimensionality by as much as two orders of magnitude, both accelerating convergence and yielding more general solutions. ",A Frequency-Domain Encoding for Neuroevolution " We propose AD3, a new algorithm for approximate maximum a posteriori (MAP) inference on factor graphs based on the alternating directions method of multipliers. Like dual decomposition algorithms, AD3 uses worker nodes to iteratively solve local subproblems and a controller node to combine these local solutions into a global update. The key characteristic of AD3 is that each local subproblem has a quadratic regularizer, leading to a faster consensus than subgradient-based dual decomposition, both theoretically and in practice. We provide closed-form solutions for these AD3 subproblems for binary pairwise factors and factors imposing first-order logic constraints. For arbitrary factors (large or combinatorial), we introduce an active set method which requires only an oracle for computing a local MAP configuration, making AD3 applicable to a wide range of problems. Experiments on synthetic and real-world problems show that AD3 compares favorably with the state-of-the-art. ",Alternating Directions Dual Decomposition " Travel sharing, i.e., the problem of finding parts of routes which can be shared by several travellers with different points of departure and destinations, is a complex multiagent problem that requires taking into account individual agents' preferences to come up with mutually acceptable joint plans. In this paper, we apply state-of-the-art planning techniques to real-world public transportation data to evaluate the feasibility of multiagent planning techniques in this domain. Improving travel-sharing technology has great application value due to its ability to reduce the environmental impact of travelling while at the same time providing benefits to travellers. We propose a three-phase algorithm that utilises performant single-agent planners to find individual plans in a simplified domain first, then merges them using a best-response planner which ensures resulting solutions are individually rational, and then maps the resulting plan onto the full temporal planning domain to schedule actual journeys. The evaluation of our algorithm on real-world, multi-modal public transportation data for the United Kingdom shows linear scalability both in the scenario size and in the number of agents, where trade-offs have to be made between total cost improvement, the percentage of feasible timetables identified for journeys, and the prolongation of these journeys. 
Our system constitutes the first implementation of strategic multiagent planning algorithms in large-scale domains and provides insights into the engineering process of translating general domain-independent multiagent planning algorithms to real-world applications. ","Applying Strategic Multiagent Planning to Real-World Travel Sharing Problems" " Robust optimization is one of the fundamental approaches to dealing with uncertainty in combinatorial optimization. This paper considers the robust spanning tree problem with interval data, which arises in a variety of telecommunication applications. It proposes a constraint satisfaction approach using a combinatorial lower bound, a pruning component that removes infeasible and suboptimal edges, as well as a search strategy exploring the most uncertain edges first. The resulting algorithm is shown to produce very dramatic improvements over the mathematical programming approach of Yaman et al. and to enlarge considerably the class of problems amenable to effective solutions. ","A constraint satisfaction approach to the robust spanning tree problem with interval data" " The problem of learning Markov equivalence classes of Bayesian network structures may be solved by searching for the maximum of a scoring metric in a space of these classes. This paper deals with the definition and analysis of one such search space. We use a theoretically motivated neighbourhood, the inclusion boundary, and represent equivalence classes by essential graphs. We show that this search space is connected and that the score of the neighbours can be evaluated incrementally. We devise a practical way of building this neighbourhood for an essential graph that is purely graphical and does not explicitly refer to the underlying independences. We find that its size can be intractable, depending on the complexity of the essential graph of the equivalence class. The emphasis is put on the potential use of this space with greedy hill-climbing search. ","On the Construction of the Inclusion Boundary Neighbourhood for Markov Equivalence Classes of Bayesian Network Structures" " Recently, it has been emphasized that the possibility theory framework allows us to distinguish between i) what is possible because it is not ruled out by the available knowledge, and ii) what is possible for sure. This distinction may be useful when representing knowledge, for modelling values which are not impossible because they are consistent with the available knowledge on the one hand, and values guaranteed to be possible because they have been reported from observations on the other hand. It is also of interest when expressing preferences, to point out values which are positively desired among those which are not rejected. This distinction can be encoded by two types of constraints expressed in terms of necessity measures and in terms of guaranteed possibility functions, which induce a pair of possibility distributions at the semantic level. A consistency condition should ensure that what is claimed to be guaranteed as possible is indeed not impossible. The present paper investigates the representation of this bipolar view, including the case when it is stated by means of conditional measures, or by means of comparative context-dependent constraints. The interest of this bipolar framework, which has been recently stressed for expressing preferences, is also pointed out in the representation of diagnostic knowledge. 
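As a minimal illustration of the pair of distributions mentioned in the abstract above, the sketch below encodes the two sides of the bipolar view as two possibility distributions over a toy set of interpretations and checks the consistency condition that whatever is claimed guaranteed possible is not impossible. The interpretation names and the numerical degrees are hypothetical choices of ours, not taken from the paper.

# Illustrative sketch (toy values, not from the paper): the bipolar view
# induces two possibility distributions over interpretations -- pi, which
# encodes what is not ruled out by the constraints stated in terms of
# necessity measures, and delta, which encodes what is guaranteed possible
# according to the constraints stated in terms of guaranteed possibility.

interpretations = ["w0", "w1", "w2", "w3"]

pi = {"w0": 1.0, "w1": 0.7, "w2": 0.4, "w3": 0.0}     # not impossible
delta = {"w0": 0.6, "w1": 0.3, "w2": 0.0, "w3": 0.0}  # guaranteed possible


def is_consistent(delta, pi):
    """Consistency condition of the bipolar representation: everything
    claimed to be guaranteed possible must also be not impossible,
    i.e. delta(w) <= pi(w) for every interpretation w."""
    return all(delta[w] <= pi[w] for w in interpretations)


print(is_consistent(delta, pi))  # True for the toy degrees above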
",Bipolar Possibilistic Representations " We develop a qualitative theory of Markov Decision Processes (MDPs) and Partially Observable MDPs that can be used to model sequential decision making tasks when only qualitative information is available. Our approach is based upon an order-of-magnitude approximation of both probabilities and utilities, similar to epsilon-semantics. The result is a qualitative theory that has close ties with the standard maximum-expected-utility theory and is amenable to general planning techniques. ",Qualitative MDPs and POMDPs: An Order-Of-Magnitude Approximation " The ability to make decisions and to assess potential courses of action is a corner-stone of many AI applications, and usually this requires explicit information about the decision-maker s preferences. IN many applications, preference elicitation IS a serious bottleneck.The USER either does NOT have the time, the knowledge, OR the expert support required TO specify complex multi - attribute utility functions. IN such cases, a method that IS based ON intuitive, yet expressive, preference statements IS required. IN this paper we suggest the USE OF TCP - nets, an enhancement OF CP - nets, AS a tool FOR representing, AND reasoning about qualitative preference statements.We present AND motivate this framework, define its semantics, AND show how it can be used TO perform constrained optimization. ",Introducing Variable Importance Tradeoffs into CP-Nets " We outline a class of problems, typical of Mars rover operations, that are problematic for current methods of planning under uncertainty. The existing methods fail because they suffer from one or more of the following limitations: 1) they rely on very simple models of actions and time, 2) they assume that uncertainty is manifested in discrete action outcomes, 3) they are only practical for very small problems. For many real world problems, these assumptions fail to hold. In particular, when planning the activities for a Mars rover, none of the above assumptions is valid: 1) actions can be concurrent and have differing durations, 2) there is uncertainty concerning action durations and consumption of continuous resources like power, and 3) typical daily plans involve on the order of a hundred actions. This class of problems may be of particular interest to the UAI community because both classical and decision-theoretic planning techniques may be useful in solving it. We describe the rover problem, discuss previous work on planning under uncertainty, and present a detailed, but very small, example illustrating some of the difficulties of finding good plans. ","Planning under Continuous Time and Resource Uncertainty: A Challenge for AI" " This paper concerns the assessment of direct causal effects from a combination of: (i) non-experimental data, and (ii) qualitative domain knowledge. Domain knowledge is encoded in the form of a directed acyclic graph (DAG), in which all interactions are assumed linear, and some variables are presumed to be unobserved. We provide a generalization of the well-known method of Instrumental Variables, which allows its application to models with few conditional independeces. ",Generalized Instrumental Variables " In this paper, we derive optimality results for greedy Bayesian-network search algorithms that perform single-edge modifications at each step and use asymptotically consistent scoring criteria. 
Our results extend those of Meek (1997) and Chickering (2002), who demonstrate that in the limit of large datasets, if the generative distribution is perfect with respect to a DAG defined over the observable variables, such search algorithms will identify this optimal (i.e. generative) DAG model. We relax their assumption about the generative distribution, and assume only that this distribution satisfies the composition property over the observable variables, which is a more realistic assumption for real domains. Under this assumption, we guarantee that the search algorithms identify an inclusion-optimal model; that is, a model that (1) contains the generative distribution and (2) has no sub-model that contains this distribution. In addition, we show that the composition property is guaranteed to hold whenever the dependence relationships in the generative distribution can be characterized by paths between singleton elements in some generative graphical model (e.g. a DAG, a chain graph, or a Markov network), even when the generative model includes unobserved variables, and even when the observed data is subject to selection bias. ",Finding Optimal Bayesian Networks " The paper presents an iterative version of join-tree clustering that applies the message passing of the join-tree clustering algorithm to join-graphs rather than to join-trees, iteratively. It is inspired by the success of Pearl's belief propagation algorithm as an iterative approximation scheme on one hand, and by the success of the recently introduced mini-clustering scheme, MC(i), as an anytime approximation method on the other. The proposed Iterative Join-Graph Propagation (IJGP) belongs to the class of generalized belief propagation methods, recently proposed using an analogy with algorithms in statistical physics. Empirical evaluation of this approach on a number of problem classes demonstrates that even the most time-efficient variant is almost always superior to IBP and MC(i), and is sometimes more accurate by as much as several orders of magnitude. ",Iterative Join-Graph Propagation " In this paper, we continue our research on the algorithmic aspects of Halpern and Pearl's causes and explanations in the structural-model approach. To this end, we present new characterizations of weak causes for certain classes of causal models, which show that under suitable restrictions deciding causes and explanations is tractable. To our knowledge, these are the first explicit tractability results for the structural-model approach. ","Causes and Explanations in the Structural-Model Approach: Tractable Cases" " We formulate necessary and sufficient conditions for an arbitrary discrete probability distribution to factor according to an undirected graphical model, or a log-linear model, or other more general exponential models. This result generalizes the well-known Hammersley-Clifford Theorem. ",Factorization of Discrete Probability Distributions " This paper presents a decision-theoretic approach to statistical inference that satisfies the likelihood principle (LP) without using prior information. Unlike the Bayesian approach, which also satisfies LP, we do not assume knowledge of the prior distribution of the unknown parameter. With respect to information that can be obtained from an experiment, our solution is more efficient than Wald's minimax solution. However, with respect to information assumed to be known before the experiment, our solution demands less input than the Bayesian solution. 
","Statistical Decisions Using Likelihood Information Without Prior Probabilities" " We present a principled and efficient planning algorithm for collaborative multiagent dynamical systems. All computation, during both the planning and the execution phases, is distributed among the agents; each agent only needs to model and plan for a small part of the system. Each of these local subsystems is small, but once they are combined they can represent an exponentially larger problem. The subsystems are connected through a subsystem hierarchy. Coordination and communication between the agents is not imposed, but derived directly from the structure of this hierarchy. A globally consistent plan is achieved by a message passing algorithm, where messages correspond to natural local reward functions and are computed by local linear programs; another message passing algorithm allows us to execute the resulting policy. When two portions of the hierarchy share the same structure, our algorithm can reuse plans and messages to speed up computation. ",Distributed Planning in Hierarchical Factored MDPs " We describe expectation propagation for approximate inference in dynamic Bayesian networks as a natural extension of Pearl s exact belief propagation.Expectation propagation IS a greedy algorithm, converges IN many practical cases, but NOT always.We derive a DOUBLE - loop algorithm, guaranteed TO converge TO a local minimum OF a Bethe free energy.Furthermore, we show that stable fixed points OF (damped) expectation propagation correspond TO local minima OF this free energy, but that the converse need NOT be the CASE .We illustrate the algorithms BY applying them TO switching linear dynamical systems AND discuss implications FOR approximate inference IN general Bayesian networks. ","Expectation Propogation for approximate inference in dynamic Bayesian networks" " We extend the language of influence diagrams to cope with decision scenarios where the order of decisions and observations is not determined. As the ordering of decisions is dependent on the evidence, a step-strategy of such a scenario is a sequence of dependent choices of the next action. A strategy is a step-strategy together with selection functions for decision actions. The structure of a step-strategy can be represented as a DAG with nodes labeled with action variables. We introduce the concept of GS-DAG: a DAG incorporating an optimal step-strategy for any instantiation. We give a method for constructing GS-DAGs, and we show how to use a GS-DAG for determining an optimal strategy. Finally we discuss how analysis of relevant past can be used to reduce the size of the GS-DAG. ",Unconstrained Influence Diagrams " We introduce a new Bayesian network (BN) scoring metric called the Global Uniform (GU) metric. This metric is based on a particular type of default parameter prior. Such priors may be useful when a BN developer is not willing or able to specify domain-specific parameter priors. The GU parameter prior specifies that every prior joint probability distribution P consistent with a BN structure S is considered to be equally likely. Distribution P is consistent with S if P includes just the set of independence relations defined by S. We show that the GU metric addresses some undesirable behavior of the BDeu and K2 Bayesian network scoring metrics, which also use particular forms of default parameter priors. A closed form formula for computing GU for special classes of BNs is derived. 
Efficiently computing GU for an arbitrary BN remains an open problem. ","A Bayesian Network Scoring Metric That Is Based On Globally Uniform Parameter Priors" " This paper investigates value function approximation in the context of zero-sum Markov games, which can be viewed as a generalization of the Markov decision process (MDP) framework to the two-agent case. We generalize error bounds from MDPs to Markov games and describe generalizations of reinforcement learning algorithms to Markov games. We present a generalization of the optimal stopping problem to a two-player simultaneous-move Markov game. For this special problem, we provide stronger bounds and can guarantee convergence for LSTD and temporal difference learning with linear value function approximation. We demonstrate the viability of value function approximation for Markov games by using the Least-Squares Policy Iteration (LSPI) algorithm to learn good policies for a soccer domain and a flow control problem. ",Value Function Approximation in Zero-Sum Markov Games " The Reverse Water Gas Shift system (RWGS) is a complex physical system designed to produce oxygen from the carbon dioxide atmosphere on Mars. If sent to Mars, it would operate without human supervision, thus requiring a reliable automated system for monitoring and control. The RWGS presents many challenges typical of real-world systems, including: noisy and biased sensors, nonlinear behavior, effects that are manifested over different time granularities, and unobservability of many important quantities. In this paper we model the RWGS using a hybrid (discrete/continuous) Dynamic Bayesian Network (DBN), where the state at each time slice contains 33 discrete and 184 continuous variables. We show how the system state can be tracked using probabilistic inference over the model. We discuss how to deal with the various challenges presented by the RWGS, providing a suite of techniques that are likely to be useful in a wide range of applications. In particular, we describe a general framework for dealing with nonlinear behavior using numerical integration techniques, extending the successful Unscented Filter. We also show how to use a fixed-point computation to deal with effects that develop at different time scales, specifically rapid changes occurring during slowly changing processes. We test our model using real data collected from the RWGS, demonstrating the feasibility of hybrid DBNs for monitoring complex real-world physical systems. ",Monitoring a Complex Physical System using a Hybrid Dynamic Bayes Net " We propose a formal treatment of scenarios in the context of a dialectical argumentation formalism for qualitative reasoning about uncertain propositions. Our formalism extends prior work in which arguments for and against uncertain propositions were presented and compared in interaction spaces called Agoras. We now define the notion of a scenario in this framework and use it to define a set of qualitative uncertainty labels for propositions across a collection of scenarios. This work is intended to lead to a formal theory of scenarios and scenario analysis. ",Formalizing Scenario Analysis " This paper is about searching the combinatorial space of contingency tables during the inner loop of a nonlinear statistical optimization. 
Examples of this operation in various data analytic communities include searching for nonlinear combinations of attributes that contribute significantly to a regression (statistics), searching for items to include in a decision list (machine learning), and association rule hunting (data mining). This paper investigates a new, efficient approach to this class of problems, called RADSEARCH (Real-valued All-Dimensions-tree Search). RADSEARCH finds the global optimum, and this gives us the opportunity to empirically evaluate the question: apart from algorithmic elegance, what does this attention to optimality buy us? We compare RADSEARCH with other recent successful search algorithms such as CN2, PRIM, APriori, OPUS and DenseMiner. Finally, we introduce RADREG, a new regression algorithm for learning real-valued outputs based on RADSEARCHing for high-order interactions. ","Real-valued All-Dimensions search: Low-overhead rapid searching over subsets of attributes" " Exact monitoring in dynamic Bayesian networks is intractable, so approximate algorithms are necessary. This paper presents a new family of approximate monitoring algorithms that combine the best qualities of the particle filtering and Boyen-Koller methods. Our algorithms maintain an approximate representation of the belief state in the form of sets of factored particles that correspond to samples of clusters of state variables. Empirical results show that our algorithms outperform both ordinary particle filtering and the Boyen-Koller algorithm on large systems. ",Factored Particles for Scalable Monitoring " In this paper we present a language for finite state continuous time Bayesian networks (CTBNs), which describe structured stochastic processes that evolve over continuous time. The state of the system is decomposed into a set of local variables whose values change over time. The dynamics of the system are described by specifying the behavior of each local variable as a function of its parents in a directed (possibly cyclic) graph. The model specifies, at any given point in time, the distribution over two aspects: when a local variable changes its value and the next value it takes. These distributions are determined by the variable's current value and the current values of its parents in the graph. More formally, each variable is modelled as a finite state continuous time Markov process whose transition intensities are functions of its parents. We present a probabilistic semantics for the language in terms of the generative model a CTBN defines over sequences of events. We list types of queries one might ask of a CTBN, discuss the conceptual and computational difficulties associated with exact inference, and provide an algorithm for approximate inference which takes advantage of the structure within the process. ",Continuous Time Bayesian Networks " MAP is the problem of finding a most probable instantiation of a set of variables in a Bayesian network, given some evidence. MAP appears to be a significantly harder problem than the related problems of computing the probability of evidence, Pr, or MPE, a special case of MAP. Because of the complexity of MAP, and the lack of viable algorithms to approximate it, MAP computations are generally avoided by practitioners. This paper investigates the complexity of MAP. We show that MAP is complete for NP^PP. We also provide negative complexity results for elimination-based algorithms. It turns out that MAP remains hard even when MPE and Pr are easy. 
We show that MAP is NP-complete when the networks are restricted to polytrees, and even then cannot be effectively approximated. Because there is no approximation algorithm with guaranteed results, we investigate best-effort approximations. We introduce a generic MAP approximation framework. As one instantiation of it, we implement local search coupled with belief propagation (BP) to approximate MAP. We show how to extract approximate evidence retraction information from belief propagation, which allows us to perform efficient local search. This allows MAP approximation even on networks that are too complex for even the easier problems of computing Pr or MPE to be solved exactly. Experimental results indicate that using BP and local search provides accurate MAP estimates in many cases. ",MAP Complexity Results and Approximation Methods " Quantification is well known to be a major obstacle in the construction of a probabilistic network, especially when relying on human experts for this purpose. The construction of a qualitative probabilistic network has been proposed as an initial step in a network's quantification, since the qualitative network can be used to gain preliminary insight into the projected network's reasoning behaviour. We build on this idea and present a new type of network in which both signs and numbers are specified; we further present an associated algorithm for probabilistic inference. Building upon these semi-qualitative networks, a probabilistic network can be quantified and studied in a stepwise manner. As a result, modelling inadequacies can be detected and amended at an early stage in the quantification process. ",From Qualitative to Quantitative Probabilistic Networks " We present new algorithms for inference in credal networks --- directed acyclic graphs associated with sets of probabilities. Credal networks are here interpreted as encoding strong independence relations among variables. We first present a theory of credal networks based on separately specified sets of probabilities. We also show that inference with polytrees is NP-hard in this setting. We then introduce new techniques that reduce the computational effort demanded by inference, particularly in polytrees, by exploring separability of credal sets. ","Inference with Separately Specified Sets of Probabilities in Credal Networks" " An increasing number of applications require real-time reasoning under uncertainty with streaming input. The temporal (dynamic) Bayes net formalism provides a powerful representational framework for such applications. However, existing exact inference algorithms for dynamic Bayes nets do not scale to the size of models required for real-world applications, which often contain hundreds or even thousands of variables for each time slice. In addition, existing algorithms were not developed with real-time processing in mind. We have developed a new computational approach to support real-time exact inference in large temporal Bayes nets. Our approach tackles scalability by recognizing that the complexity of the inference depends on the number of interface nodes between time slices and by exploiting the distinction between static and dynamic nodes in order to reduce the number of interface nodes and to factorize their joint probability distribution. We approach the real-time issue by organizing temporal Bayes nets into static representations, and then using the symbolic probabilistic inference algorithm to derive analytic expressions for the static representations. 
The parts of these expressions that do not change at each time step are pre-computed. The remaining parts are compiled into efficient procedural code so that the memory and CPU resources required by the inference are small and fixed. ",Real-Time Inference with Large-Scale Temporal Bayes Nets " We address the question of convergence in the loopy belief propagation (LBP) algorithm. Specifically, we relate convergence of LBP to the existence of a weak limit for a sequence of Gibbs measures defined on the LBP's associated computation tree. Using tools from the theory of Gibbs measures, we develop easily testable sufficient conditions for convergence. The failure of convergence of LBP implies the existence of multiple phases for the associated Gibbs specification. These results give new insight into the mechanics of the algorithm. ",Loopy Belief Propagation and Gibbs Measures " A popular approach to solving a decision process with non-Markovian rewards (NMRDP) is to exploit a compact representation of the reward function to automatically translate the NMRDP into an equivalent Markov decision process (MDP) amenable to our favorite MDP solution method. The contribution of this paper is a representation of non-Markovian reward functions and a translation into MDP aimed at making the best possible use of state-based anytime algorithms as the solution method. By explicitly constructing and exploring only parts of the state space, these algorithms are able to trade computation time for policy quality, and have proven quite effective in dealing with large MDPs. Our representation extends future linear temporal logic (FLTL) to express rewards. Our translation has the effect of embedding model-checking in the solution method. It results in an MDP of the minimal size achievable without stepping outside the anytime framework, and consequently in better policies by the deadline. ","Anytime State-Based Solution Methods for Decision Processes with non-Markovian Rewards" " The validity of a causal model can be tested only if the model imposes constraints on the probability distribution that governs the generated data. In the presence of unmeasured variables, causal models may impose two types of constraints: conditional independencies, as read through the d-separation criterion, and functional constraints, for which no general criterion is available. This paper offers a systematic way of identifying functional constraints and, thus, facilitates the task of testing causal models as well as inferring such models from data. ",On the Testable Implications of Causal Models with Hidden Variables " We propose an efficient method for Bayesian network inference in models with functional dependence. We generalize the multiplicative factorization method originally designed by Takikawa and D'Ambrosio (1999) for models with independence of causal influence. Using a hidden variable, we transform a probability potential into a product of two-dimensional potentials. The multiplicative factorization yields more efficient inference. For example, in junction tree propagation it helps to avoid large cliques. In order to keep potentials small, the number of states of the hidden variable should be minimized. We transform this problem into a combinatorial problem of minimal base in a particular space. We present an example of a computerized adaptive test, in which the factorization method is significantly more efficient than previous inference methods. 
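To make the kind of transformation described in the abstract above concrete, the sketch below shows the well-known multiplicative factorization of a noisy-OR potential, the classic independence-of-causal-influence model: a hidden binary variable lets the full conditional table be written as a sum over that variable of a product of two-dimensional potentials. The variable names, the inhibition probabilities and the specific potentials are illustrative choices of ours, not taken from the paper.

# Illustrative sketch (toy numbers, not from the paper): multiplicative
# factorization of a noisy-OR CPT P(Y | X1..Xn) using a hidden binary
# variable Yp, so that the CPT becomes a sum over Yp of a product of
# two-dimensional potentials psi_0(Y, Yp) and psi_i(Yp, X_i).
import itertools
import numpy as np

q = [0.2, 0.5, 0.7]   # hypothetical inhibition probabilities of the causes
n = len(q)

def noisy_or_cpt(x):
    """Exact noisy-OR: P(Y=0 | x) = prod_i q_i^{x_i}."""
    p0 = np.prod([q[i] ** x[i] for i in range(n)])
    return np.array([p0, 1.0 - p0])

def psi_i(i, yp, xi):
    """Two-dimensional potential linking the hidden variable to cause i."""
    return q[i] ** xi if yp == 0 else 1.0

# Two-dimensional potential linking Y to the hidden variable; note the
# negative entry -- factorization potentials need not be probabilities.
psi_0 = np.array([[1.0, 0.0],    # row for Y = 0
                  [-1.0, 1.0]])  # row for Y = 1

def factored_cpt(x):
    """Recover P(Y | x) by summing the product of small potentials over Yp."""
    out = np.zeros(2)
    for yp in (0, 1):
        prod = np.prod([psi_i(i, yp, x[i]) for i in range(n)])
        out += psi_0[:, yp] * prod
    return out

# The factorization reproduces the full CPT for every parent configuration.
for x in itertools.product((0, 1), repeat=n):
    assert np.allclose(noisy_or_cpt(x), factored_cpt(x))
print("multiplicative factorization matches the noisy-OR CPT")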
",Exploiting Functional Dependence in Bayesian Network Inference " This paper uses decision-theoretic principles to obtain new insights into the assessment and updating of probabilities. First, a new foundation of Bayesianism is given. It does not require infinite atomless uncertainties as did Savage s classical result, AND can therefore be applied TO ANY finite Bayesian network.It neither requires linear utility AS did de Finetti s classical result, AND r ntherefore allows FOR the empirically AND normatively desirable risk r naversion.Finally, BY identifying AND fixing utility IN an elementary r nmanner, our result can readily be applied TO identify methods OF r nprobability updating.Thus, a decision - theoretic foundation IS given r nto the computationally efficient method OF inductive reasoning r ndeveloped BY Rudolf Carnap.Finally, recent empirical findings ON r nprobability assessments are discussed.It leads TO suggestions FOR r ncorrecting biases IN probability assessments, AND FOR an alternative r nto the Dempster - Shafer belief functions that avoids the reduction TO r ndegeneracy after multiple updatings.r n ","Decision Principles to justify Carnap's Updating Method and to Suggest Corrections of Probability Judgments (Invited Talks)" " We select policies for large Markov Decision Processes (MDPs) with compact first-order representations. We find policies that generalize well as the number of objects in the domain grows, potentially without bound. Existing dynamic-programming approaches based on flat, propositional, or first-order representations either are impractical here or do not naturally scale as the number of objects grows without bound. We implement and evaluate an alternative approach that induces first-order policies using training data constructed by solving small problem instances using PGraphplan (Blum & Langford, 1999). Our policies are represented as ensembles of decision lists, using a taxonomic concept language. This approach extends the work of Martin and Geffner (2000) to stochastic domains, ensemble learning, and a wider variety of problems. Empirically, we find ""good"" policies for several stochastic first-order MDPs that are beyond the scope of previous approaches. We also discuss the application of this work to the relational reinforcement-learning problem. ",Inductive Policy Selection for First-Order MDPs " NP-SPEC is a language for specifying problems in NP in a declarative way. Despite the fact that the semantics of the language was given by referring to Datalog with circumscription, which is very close to ASP, so far the only existing implementations are by means of ECLiPSe Prolog and via Boolean satisfiability solvers. In this paper, we present translations from NP-SPEC into various forms of ASP and analyze them. We also argue that it might be useful to incorporate certain language constructs of NP-SPEC into mainstream ASP. ",Translating NP-SPEC into ASP " In this paper we continue the work on our extension of Answer Set Programming by non-Herbrand functions and add to the language support for arithmetic expressions and various inequality relations over non-Herbrand functions, as well as consistency-restoring rules from CR-Prolog. We demonstrate the use of this latest version of the language in the representation of important kinds of knowledge. 
","Language ASP{f} with Arithmetic Expressions and Consistency-Restoring Rules" " Within the area of computational models of argumentation, the instantiation-based approach is gaining more and more attention, not at least because meaningful input for Dung's abstract frameworks is provided in that way. In a nutshell, the aim of instantiation-based argumentation is to form, from a given knowledge base, a set of arguments and to identify the conflicts between them. The resulting network is then evaluated by means of extension-based semantics on an abstract level, i.e. on the resulting graph. While several systems are nowadays available for the latter step, the automation of the instantiation process itself has received less attention. In this work, we provide a novel approach to construct and visualize an argumentation framework from a given knowledge base. The system we propose relies on Answer-Set Programming and follows a two-step approach. A first program yields the logic-based arguments as its answer-sets; a second program is then used to specify the relations between arguments based on the answer-sets of the first program. As it turns out, this approach not only allows for a flexible and extensible tool for instantiation-based argumentation, but also provides a new method for answer-set visualization in general. ",Utilizing ASP for Generating and Visualizing Argumentation Frameworks " In this paper we present an Action Language-Answer Set Programming based approach to solving planning and scheduling problems in hybrid domains - domains that exhibit both discrete and continuous behavior. We use action language H to represent the domain and then translate the resulting theory into an A-Prolog program. In this way, we reduce the problem of finding solutions to planning and scheduling problems to computing answer sets of A-Prolog programs. We cite a planning and scheduling example from the literature and show how to model it in H. We show how to translate the resulting H theory into an equivalent A-Prolog program. We compute the answer sets of the resulting program using a hybrid solver called EZCSP which loosely integrates a constraint solver with an answer set solver. The solver allows us reason about constraints over reals and compute solutions to complex planning and scheduling problems. Results have shown that our approach can be applied to any planning and scheduling problem in hybrid domains. ",Planning and Scheduling in Hybrid Domains Using Answer Set Programming " The advance of Internet and Sensor technology has brought about new challenges evoked by the emergence of continuous data streams. Beyond rapid data processing, application areas like ambient assisted living, robotics, or dynamic scheduling involve complex reasoning tasks. We address such scenarios and elaborate upon approaches to knowledge-intense stream reasoning, based on Answer Set Programming (ASP). While traditional ASP methods are devised for singular problem solving, we develop new techniques to formulate and process problems dealing with emerging as well as expiring data in a seamless way. ",Answer Set Programming for Stream Reasoning " DL-Lite is an important family of description logics. Recently, there is an increasing interest in handling inconsistency in DL-Lite as the constraint imposed by a TBox can be easily violated by assertions in ABox in DL-Lite. 
In this paper, we present a distance-based paraconsistent semantics based on the notion of feature in DL-Lite, which provides a novel way to rationally draw meaningful conclusions even from an inconsistent knowledge base. Finally, we investigate several important logical properties of this entailment relation based on the new semantics and show its promising advantages in non-monotonic reasoning for DL-Lite. ",A Distance-based Paraconsistent Semantics for DL-Lite " This paper presents a novel approach to merging multiple knowledge bases based on variable forgetting, which is a useful tool for resolving contradictions by filtering out some given variables. This paper first builds a relationship between belief merging and variable forgetting by using dilation. Variable forgetting is applied to capture the belief merging operation. Finally, some new merging operators are developed by modifying candidate variables to remedy the shortcomings of traditional merging operators. Different from the model selection of traditional merging operators, variable selection in these new operators can, as an alternative approach, provide intuitive information about an atomic variable across the whole set of knowledge bases. ",A Forgetting-based Approach to Merging Knowledge Bases " In an open, constantly changing and collaborative environment like the forthcoming Semantic Web, it is reasonable to expect that knowledge sources will contain noise and inaccuracies. It is well known that description logic, the logical foundation of the Semantic Web, lacks the ability to tolerate inconsistent or incomplete data. Moreover, the reasoning ability of recent paraconsistent approaches for the Semantic Web is rather weak. In this paper, we present a tableau algorithm based on sign transformation for the Semantic Web which offers a stronger reasoning ability. We prove that the tableau algorithm is decidable and that it performs the same function as the classical tableau algorithm on consistent knowledge bases. ","A Paraconsistent Tableau Algorithm Based on Sign Transformation in Semantic Web" " This volume contains the papers presented at the fifth workshop on Answer Set Programming and Other Computing Paradigms (ASPOCP 2012) held on September 4th, 2012 in Budapest, co-located with the 28th International Conference on Logic Programming (ICLP 2012). It thus continues a series of previous events co-located with ICLP, aiming at facilitating the discussion about crossing the boundaries of current ASP techniques in theory, solving, and applications, in combination with or inspired by other computing paradigms. ","Proceedings of Answer Set Programming and Other Computing Paradigms (ASPOCP 2012), 5th International Workshop, September 4, 2012, Budapest, Hungary" " We present a general framework for defining priors on model structure and sampling from the posterior using the Metropolis-Hastings algorithm. The key idea is that structure priors are defined via a probability tree and that the proposal mechanism for the Metropolis-Hastings algorithm operates by traversing this tree, thereby defining a cheaply computable acceptance probability. We have applied this approach to Bayesian net structure learning using a number of priors and tree traversal strategies. Our results show that these must be chosen appropriately for this approach to be successful. ",Markov Chain Monte Carlo using Tree-Based Priors on Model Structure " Possibility theory offers either a qualitative or a numerical framework for representing uncertainty, in terms of dual measures of possibility and necessity. 
This leads to the existence of two kinds of possibilistic causal graphs where the conditioning is either based on the minimum, or the product operator. Benferhat et al. (1999) have investigated the connections between min-based graphs and possibilistic logic bases (made of classical formulas weighted in terms of certainty). This paper deals with a more difficult issue : the product-based graphical representations of possibilistic bases, which provides an easy structural reading of possibilistic bases. Moreover, this paper also provides another reading of possibilistic bases in terms of comparative preferences of the form ""in the context p, q is preferred to not q"". This enables us to explicit preferences underlying a set of goals with different levels of priority. ",Graphical readings of possibilistic logic bases " This paper presents a sound and completecalculus for causal relevance, based onPearl's functional models semantics.The calculus consists of axioms and rulesof inference for reasoning about causalrelevance relationships.We extend the set of known axioms for causalrelevance with three new axioms, andintroduce two new rules of inference forreasoning about specific subclasses ofmodels.These subclasses give a more refinedcharacterization of causal models than the one given in Halpern's axiomatizationof counterfactual reasoning.Finally, we show how the calculus for causalrelevance can be used in the task ofidentifying causal structure from non-observational data. ",A Calculus for Causal Relevance " We propose a new directed graphical representation of utility functions, called UCP-networks, that combines aspects of two existing graphical models: generalized additive models and CP-networks. The network decomposes a utility function into a number of additive factors, with the directionality of the arcs reflecting conditional dependence of preference statements - in the underlying (qualitative) preference ordering - under a {em ceteris paribus} (all else being equal) interpretation. This representation is arguably natural in many settings. Furthermore, the strong CP-semantics ensures that computation of optimization and dominance queries is very efficient. We also demonstrate the value of this representation in decision making. Finally, we describe an interactive elicitation procedure that takes advantage of the linear nature of the constraints on ""`tradeoff weights"" imposed by a UCP-network. This procedure allows the network to be refined until the regret of the decision with minimax regret (with respect to the incompletely specified utility function) falls below a specified threshold (e.g., the cost of further questioning. ","UCP-Networks: A Directed Graphical Representation of Conditional Utilities" " We present two sampling algorithms for probabilistic confidence inference in Bayesian networks. These two algorithms (we call them AIS-BN-mu and AIS-BN-sigma algorithms) guarantee that estimates of posterior probabilities are with a given probability within a desired precision bound. Our algorithms are based on recent advances in sampling algorithms for (1) estimating the mean of bounded random variables and (2) adaptive importance sampling in Bayesian networks. In addition to a simple stopping rule for sampling that they provide, the AIS-BN-mu and AIS-BN-sigma algorithms are capable of guiding the learning process in the AIS-BN algorithm. An empirical evaluation of the proposed algorithms shows excellent performance, even for very unlikely evidence. 
",Confidence Inference in Bayesian Networks " It is ""well known"" that in linear models: (1) testable constraints on the marginal distribution of observed variables distinguish certain cases in which an unobserved cause jointly influences several observed variables; (2) the technique of ""instrumental variables"" sometimes permits an estimation of the influence of one variable on another even when the association between the variables may be confounded by unobserved common causes; (3) the association (or conditional probability distribution of one variable given another) of two variables connected by a path or trek can be computed directly from the parameter values associated with each edge in the path or trek; (4) the association of two variables produced by multiple treks can be computed from the parameters associated with each trek; and (5) the independence of two variables conditional on a third implies the corresponding independence of the sums of the variables over all units conditional on the sums over all units of each of the original conditioning variables.These properties are exploited in search procedures. It is also known that properties (2)-(5) do not hold for all Bayes nets with binary variables. We show that (1) holds for all Bayes nets with binary variables and (5) holds for all singly trek-connected Bayes nets of that kind. We further show that all five properties hold for Bayes nets with any DAG and binary variables parameterized with noisy-or and noisy-and gates. ",Linearity Properties of Bayes Nets with Binary Variables " This paper explores algorithms for processing probabilistic and deterministic information when the former is represented as a belief network and the latter as a set of boolean clauses. The motivating tasks are 1. evaluating beliefs networks having a large number of deterministic relationships and2. evaluating probabilities of complex boolean querie over a belief network. We propose a parameterized family of variable elimination algorithms that exploit both types of information, and that allows varying levels of constraint propagation inferences. The complexity of the scheme is controlled by the induced-width of the graph {em augmented} by the dependencies introduced by the boolean constraints. Preliminary empirical evaluation demonstrate the effect of constraint propagation on probabilistic computation. ",Hybrid Processing of Beliefs and Constraints " In this paper we analyze two recent axiomatic approaches proposed by Dubois et al and by Giang and Shenoy to qualitative decision making where uncertainty is described by possibility theory. Both axiomtizations are inspired by von Neumann and Morgenstern's system of axioms for the case of probability theory. We show that our approach naturally unifies two axiomatic systems that correspond respectively to pessimistic and optimistic decision criteria proposed by Dubois et al. The simplifying unification is achieved by (i) replacing axioms that are supposed to reflect two informational attitudes (uncertainty aversion and uncertainty attraction) by an axiom that imposes order on set of standard lotteries and (ii) using a binary utility scale in which each utility level is represented by a pair of numbers. 
","A Comparison of Axiomatic Approaches to Qualitative Decision Making Using Possibility Theory" " Graphical Markov models determined by acyclic digraphs (ADGs), also called directed acyclic graphs (DAGs), are widely studied in statistics, computer science (as Bayesian networks), operations research (as influence diagrams), and many related fields. Because different ADGs may determine the same Markov equivalence class, it long has been of interest to determine the efficiency gained in model specification and search by working directly with Markov equivalence classes of ADGs rather than with ADGs themselves. A computer program was written to enumerate the equivalence classes of ADG models as specified by Pearl & Verma's equivalence criterion. The program counted equivalence classes for models up to and including 10 vertices. The ratio of number of classes to ADGs appears to approach an asymptote of about 0.267. Classes were analyzed according to number of edges and class size. By edges, the distribution of number of classes approaches a Gaussian shape. By class size, classes of size 1 are most common, with the proportions for larger sizes initially decreasing but then following a more irregular pattern. The maximum number of classes generated by any undirected graph was found to increase approximately factorially. The program also includes a new variation of orderly algorithm for generating undirected graphs. ",Enumerating Markov Equivalence Classes of Acyclic Digraph Models " In previous work cite{Ha98:Towards} we presented a case-based approach to eliciting and reasoning with preferences. A key issue in this approach is the definition of similarity between user preferences. We introduced the probabilistic distance as a measure of similarity on user preferences, and provided an algorithm to compute the distance between two partially specified {em value} functions. This is for the case of decision making under {em certainty}. In this paper we address the more challenging issue of computing the probabilistic distance in the case of decision making under{em uncertainty}. We provide an algorithm to compute the probabilistic distance between two partially specified {em utility} functions. We demonstrate the use of this algorithm with a medical data set of partially specified patient preferences,where none of the other existing distancemeasures appear definable. Using this data set, we also demonstrate that the case-based approach to preference elicitation isapplicable in domains with uncertainty. Finally, we provide a comprehensive analytical comparison of the probabilistic distance with some existing distance measures on preferences. ","Similarity Measures on Preference Structures, Part II: Utility Functions" " We propose a new definition of actual causes, using structural equations to model counterfactuals.We show that the definitions yield a plausible and elegant account ofcausation that handles well examples which have caused problems forother definitions and resolves major difficulties in the traditionalaccount. In a companion paper, we show how the definition of causality can beused to give an elegant definition of (causal) explanation. ",Causes and Explanations: A Structural-Model Approach --- Part 1: Causes " We are developing a general framework for using learned Bayesian models for decision-theoretic control of search and reasoningalgorithms. 
We illustrate the approach on the specific task of controlling both general and domain-specific solvers on a hard class of structured constraint satisfaction problems. A successful strategyfor reducing the high (and even infinite) variance in running time typically exhibited by backtracking search algorithms is to cut off and restart the search if a solution is not found within a certainamount of time. Previous work on restart strategies have employed fixed cut off values. We show how to create a dynamic cut off strategy by learning a Bayesian model that predicts the ultimate length of a trial based on observing the early behavior of the search algorithm. Furthermore, we describe the general conditions under which a dynamic restart strategy can outperform the theoretically optimal fixed strategy. ",A Bayesian Approach to Tackling Hard Computational Problems " Every directed acyclic graph (DAG) over a finite non-empty set of variables (= nodes) N induces an independence model over N, which is a list of conditional independence statements over N.The inclusion problem is how to characterize (in graphical terms) whether all independence statements in the model induced by a DAG K are in the model induced by a second DAG L. Meek (1997) conjectured that this inclusion holds iff there exists a sequence of DAGs from L to K such that only certain 'legal' arrow reversal and 'legal' arrow adding operations are performed to get the next DAG in the sequence.In this paper we give several characterizations of inclusion of DAG models and verify Meek's conjecture in the case that the DAGs K and L differ in at most one adjacency. As a warming up a rigorous proof of well-known graphical characterizations of equivalence of DAGs, which is a highly related problem, is given. ",On characterizing Inclusion of Bayesian Networks " This article deals with plausible reasoning from incomplete knowledge about large-scale spatial properties. The availableinformation, consisting of a set of pointwise observations,is extrapolated to neighbour points. We make use of belief functions to represent the influence of the knowledge at a given point to another point; the quantitative strength of this influence decreases when the distance between both points increases. These influences arethen aggregated using a variant of Dempster's rule of combination which takes into account the relative dependence between observations. ",Plausible reasoning from spatial observations " This paper considers the problem of knowledge-based model construction in the presence of uncertainty about the association of domain entities to random variables. Multi-entity Bayesian networks (MEBNs) are defined as a representation for knowledge in domains characterized by uncertainty in the number of relevant entities, their interrelationships, and their association with observables. An MEBN implicitly specifies a probability distribution in terms of a hierarchically structured collection of Bayesian network fragments that together encode a joint probability distribution over arbitrarily many interrelated hypotheses. Although a finite query-complete model can always be constructed, association uncertainty typically makes exact model construction and evaluation intractable. The objective of hypothesis management is to balance tractability against accuracy. We describe an application to the problem of using intelligence reports to infer the organization and activities of groups of military vehicles. 
Our approach is compared to related work in the tracking and fusion literature. ",Hypothesis Management in Situation-Specific Network Construction " An important subclass of hybrid Bayesian networks are those that represent Conditional Linear Gaussian (CLG) distributions --- a distribution with a multivariate Gaussian component for each instantiation of the discrete variables. In this paper we explore the problem of inference in CLGs. We show that inference in CLGs can be significantly harder than inference in Bayes Nets. In particular, we prove that even if the CLG is restricted to an extremely simple structure of a polytree in which every continuous node has at most one discrete ancestor, the inference task is NP-hard.To deal with the often prohibitive computational cost of the exact inference algorithm for CLGs, we explore several approximate inference algorithms. These algorithms try to find a small subset of Gaussians which are a good approximation to the full mixture distribution. We consider two Monte Carlo approaches and a novel approach that enumerates mixture components in order of prior probability. We compare these methods on a variety of problems and show that our novel algorithm is very promising for large, hybrid diagnosis problems. ","Inference in Hybrid Networks: Theoretical Limits and Practical Algorithms" " Many real life domains contain a mixture of discrete and continuous variables and can be modeled as hybrid Bayesian Networks. Animportant subclass of hybrid BNs are conditional linear Gaussian (CLG) networks, where the conditional distribution of the continuous variables given an assignment to the discrete variables is a multivariate Gaussian. Lauritzen's extension to the clique tree algorithm can be used for exact inference in CLG networks. However, many domains also include discrete variables that depend on continuous ones, and CLG networks do not allow such dependencies to berepresented. No exact inference algorithm has been proposed for these enhanced CLG networks. In this paper, we generalize Lauritzen's algorithm, providing the first ""exact"" inference algorithm for augmented CLG networks - networks where continuous nodes are conditional linear Gaussians but that also allow discrete children ofcontinuous parents. Our algorithm is exact in the sense that it computes the exact distributions over the discrete nodes, and the exact first and second moments of the continuous ones, up to the accuracy obtained by numerical integration used within thealgorithm. When the discrete children are modeled with softmax CPDs (as is the case in many real world domains) the approximation of the continuous distributions using the first two moments is particularly accurate. Our algorithm is simple to implement and often comparable in its complexity to Lauritzen's algorithm. We show empirically that it achieves substantially higher accuracy than previous approximate algorithms. ",Exact Inference in Networks with Discrete Children of Continuous Parents " We present probabilistic logic programming under inheritance with overriding. This approach is based on new notions of entailment for reasoning with conditional constraints, which are obtained from the classical notion of logical entailment by adding the principle of inheritance with overriding. This is done by using recent approaches to probabilistic default reasoning with conditional constraints. We analyze the semantic properties of the new entailment relations. 
We also present algorithms for probabilistic logic programming under inheritance with overriding, and program transformations for an increased efficiency. ",Probabilistic Logic Programming under Inheritance with Overriding " In this paper we compare three different architectures for the evaluation of influence diagrams: HUGIN, Shafer-Shenoy, and the Lazy Evaluation architecture. The computational complexity of the architectures is compared on the Limited Memory Influence Diagram (LIMID): a diagram where only the requisite information for the computation of the optimal policies is depicted. Because the requisite information is explicitly represented in the LIMID, the evaluation can take advantage of it, and significant savings in computation can be obtained. In this paper we show how the obtained savings are considerably increased when the computations on the LIMID are performed according to the Lazy Evaluation scheme. ","Solving Influence Diagrams using HUGIN, Shafer-Shenoy and Lazy Propagation" " We consider the task of aggregating beliefs of several experts. We assume that these beliefs are represented as probability distributions. We argue that the evaluation of any aggregation technique depends on the semantic context of this task. We propose a framework, in which we assume that nature generates samples from a `true' distribution and different experts form their beliefs based on the subsets of the data they have a chance to observe. Naturally, the ideal aggregate distribution would be the one learned from the combined sample sets. Such a formulation leads to a natural way to measure the accuracy of the aggregation mechanism. We show that the well-known aggregation operator LinOP is ideally suited for that task. We propose a LinOP-based learning algorithm, inspired by the techniques developed for Bayesian learning, which aggregates the experts' distributions represented as Bayesian networks. Our preliminary experiments show that this algorithm performs well in practice. ",Aggregating Learned Probabilistic Beliefs " We propose using recognition networks for approximate inference in Bayesian networks (BNs). A recognition network is a multilayer perceptron (MLP) trained to predict posterior marginals given observed evidence in a particular BN. The input to the MLP is a vector of the states of the evidential nodes. The activity of an output unit is interpreted as a prediction of the posterior marginal of the corresponding variable. The MLP is trained using samples generated from the corresponding BN. We evaluate a recognition network that was trained to do inference in a large Bayesian network, similar in structure and complexity to the Quick Medical Reference, Decision Theoretic (QMR-DT). Our network is a binary, two-layer, noisy-OR network containing over 4000 potentially observable nodes and over 600 unobservable, hidden nodes. In real medical diagnosis, most observables are unavailable, and there is a complex and unknown bias that selects which ones are provided. We incorporate a very basic type of selection bias in our network: a known preference that available observables are positive rather than negative. Even this simple bias has a significant effect on the posterior. We compare the performance of our recognition network to state-of-the-art approximate inference algorithms on a large set of test cases. In order to evaluate the effect of our simplistic model of the selection bias, we evaluate algorithms using a variety of incorrectly modeled observation biases. 
Recognition networks performwell using both correct and incorrect observation biases. ",Recognition Networks for Approximate Inference in BN20 Networks " The Factored Frontier (FF) algorithm is a simple approximate inferencealgorithm for Dynamic Bayesian Networks (DBNs). It is very similar tothe fully factorized version of the Boyen-Koller (BK) algorithm, butinstead of doing an exact update at every step followed bymarginalisation (projection), it always works with factoreddistributions. Hence it can be applied to models for which the exactupdate step is intractable. We show that FF is equivalent to (oneiteration of) loopy belief propagation (LBP) on the original DBN, andthat BK is equivalent (to one iteration of) LBP on a DBN where wecluster some of the nodes. We then show empirically that byiterating, LBP can improve on the accuracy of both FF and BK. Wecompare these algorithms on two real-world DBNs: the first is a modelof a water treatment plant, and the second is a coupled HMM, used tomodel freeway traffic. ",The Factored Frontier Algorithm for Approximate Inference in DBNs " Most successful Bayesian network (BN) applications to datehave been built through knowledge elicitation from experts.This is difficult and time consuming, which has lead to recentinterest in automated methods for learning BNs from data. We present a case study in the construction of a BN in anintelligent tutoring application, specifically decimal misconceptions. Wedescribe the BN construction using expert elicitation and then investigate how certainexisting automated knowledge discovery methods might support the BN knowledge engineering process. ","A Case Study in Knowledge Discovery and Elicitation in an Intelligent Tutoring Application" " MAP is the problem of finding a most probable instantiation of a set of variables in a Bayesian network, given evidence. Unlike computing marginals, posteriors, and MPE (a special case of MAP), the time and space complexity of MAP is not only exponential in the network treewidth, but also in a larger parameter known as the ""constrained"" treewidth. In practice, this means that computing MAP can be orders of magnitude more expensive than computingposteriors or MPE. Thus, practitioners generally avoid MAP computations, resorting instead to approximating them by the most likely value for each MAP variableseparately, or by MPE.We present a method for approximating MAP using local search. This method has space complexity which is exponential onlyin the treewidth, as is the complexity of each search step. We investigate the effectiveness of different local searchmethods and several initialization strategies and compare them to otherapproximation schemes.Experimental results show that local search provides a much more accurate approximation of MAP, while requiring few search steps.Practically, this means that the complexity of local search is often exponential only in treewidth as opposed to the constrained treewidth, making approximating MAP as efficient as other computations. 
",Approximating MAP using Local Search " Suppose we are given the conditional probability of one variable given some other variables.Normally the full joint distribution over the conditioning variablesis required to determine the probability of the conditioned variable.Under what circumstances are the marginal distributions over the conditioning variables sufficient to determine the probability ofthe conditioned variable?Sufficiency in this sense is equivalent to additive separability ofthe conditional probability distribution.Such separability structure is natural and can be exploited forefficient inference.Separability has a natural generalization to conditional separability.Separability provides a precise notion of weaklyinteracting subsystems in temporal probabilistic models.Given a system that is decomposed into separable subsystems, exactmarginal probabilities over subsystems at future points in time can becomputed by propagating marginal subsystem probabilities, rather thancomplete system joint probabilities.Thus, separability can make exact prediction tractable.However, observations can break separability,so exact monitoring of dynamic systems remains hard. ","Sufficiency, Separability and Temporal Probabilistic Models" " There is increasing interest within the research community in the design and use of recursive probability models. Although there still remains concern about computational complexity costs and the fact that computing exact solutions can be intractable for many nonrecursive models and impossible in the general case for recursive problems, several research groups are actively developing computational techniques for recursive stochastic languages. We have developed an extension to the traditional lambda-calculus as a framework for families of Turing complete stochastic languages. We have also developed a class of exact inference algorithms based on the traditional reductions of the lambda-calculus. We further propose that using the deBruijn notation (a lambda-calculus notation with nameless dummies) supports effective caching in such systems (caching being an essential component of efficient computation). Finally, our extension to the lambda-calculus offers a foundation and general theory for the construction of recursive stochastic modeling languages as well as promise for effective caching and efficient approximation algorithms for inference. ",Toward General Analysis of Recursive Probability Models " We propose a new approach to value-directed belief state approximation for POMDPs. The value-directed model allows one to choose approximation methods for belief state monitoring that have a small impact on decision quality. Using a vector space analysis of the problem, we devise two new search procedures for selecting an approximation scheme that have much better computational properties than existing methods. Though these provide looser error bounds, we show empirically that they have a similar impact on decision quality in practice, and run up to two orders of magnitude more quickly. ",Vector-space Analysis of Belief-state Approximation for POMDPs " We consider the problem of approximate belief-state monitoring using particle filtering for the purposes of implementing a policy for a partially-observable Markov decision process (POMDP). While particle filtering has become a widely-used tool in AI for monitoring dynamical systems, rather scant attention has been paid to their use in the context of decision making. 
Assuming the existence of a value function, we derive error bounds on decision quality associated with filtering using importance sampling. We also describe an adaptive procedure that can be used to dynamically determine the number of samples required to meet specific error bounds. Empirical evidence is offered supporting this technique as a profitable means of directing sampling effort where it is needed to distinguish policies. ",Value-Directed Sampling Methods for POMDPs " We investigate a model for planning under uncertainty with temporallyextended actions, where multiple actions can be taken concurrently at each decision epoch. Our model is based on the options framework, and combines it with factored state space models,where the set of options can be partitioned into classes that affectdisjoint state variables. We show that the set of decisionepochs for concurrent options defines a semi-Markov decisionprocess, if the underlying temporally extended actions being parallelized arerestricted to Markov options. This property allows us to use SMDPalgorithms for computing the value function over concurrentoptions. The concurrent options model allows overlapping execution ofoptions in order to achieve higher performance or in order to performa complex task. We describe a simple experiment using a navigationtask which illustrates how concurrent options results in a faster planwhen compared to the case when only one option is taken at a time. ",Decision-Theoretic Planning with Concurrent Temporally Extended Actions " We consider a partially observable Markov decision problem (POMDP) that models a class of sequencing problems. Although POMDPs are typically intractable, our formulation admits tractable solution. Instead of maintaining a value function over a high-dimensional set of belief states, we reduce the state space to one of smaller dimension, in which grid-based dynamic programming techniques are effective. We develop an error bound for the resulting approximation, and discuss an application of the model to a problem in targeted advertising. ",A Tractable POMDP for a Class of Sequencing Problems " We propose a new method of discovering causal structures, based on the detection of local, spontaneous changes in the underlying data-generating model. We analyze the classes of structures that are equivalent relative to a stream of distributions produced by local changes, and devise algorithms that output graphical representations of these equivalence classes. We present experimental results, using simulated data, and examine the errors associated with detection of changes and recovery of structures. ",Causal Discovery from Changes " A Bayesian Belief Network (BN) is a model of a joint distribution over a setof n variables, with a DAG structure to represent the immediate dependenciesbetween the variables, and a set of parameters (aka CPTables) to represent thelocal conditional probabilities of a node, given each assignment to itsparents. In many situations, these parameters are themselves random variables - this may reflect the uncertainty of the domain expert, or may come from atraining sample used to estimate the parameter values. The distribution overthese ""CPtable variables"" induces a distribution over the response the BNwill return to any ""What is Pr(H | E)?"" query. This paper investigates thevariance of this response, showing first that it is asymptotically normal,then providing its mean and asymptotical variance. 
We then present aneffective general algorithm for computing this variance, which has the samecomplexity as simply computing the (mean value of) the response itself - ie,O(n 2^w), where n is the number of variables and w is the effective treewidth. Finally, we provide empirical evidence that this algorithm, whichincorporates assumptions and approximations, works effectively in practice,given only small samples. ",Bayesian Error-Bars for Belief Net Inference " With the advance of efficient analytical methods for sensitivity analysis ofprobabilistic networks, the interest in the sensitivities revealed by real-life networks is rekindled. As the amount of data resulting from a sensitivity analysis of even a moderately-sized network is alreadyoverwhelming, methods for extracting relevant information are called for. One such methodis to study the derivative of the sensitivity functions yielded for a network's parameters. We further propose to build upon the concept of admissible deviation, that is, the extent to which a parameter can deviate from the true value without inducing a change in the most likely outcome. We illustrate these concepts by means of a sensitivity analysis of a real-life probabilistic network in oncology. ",Analysing Sensitivity Data from Probabilistic Networks " Uncertainty plays a central role in spoken dialogue systems. Some stochastic models like Markov decision process (MDP) are used to model the dialogue manager. But the partially observable system state and user intention hinder the natural representation of the dialogue state. MDP-based system degrades fast when uncertainty about a user's intention increases. We propose a novel dialogue model based on the partially observable Markov decision process (POMDP). We use hidden system states and user intentions as the state set, parser results and low-level information as the observation set, domain actions and dialogue repair actions as the action set. Here the low-level information is extracted from different input modals, including speech, keyboard, mouse, etc., using Bayesian networks. Because of the limitation of the exact algorithms, we focus on heuristic approximation algorithms and their applicability in POMDP for dialogue management. We also propose two methods for grid point selection in grid-based approximation algorithms. ","Planning and Acting under Uncertainty: A New Model for Spoken Dialogue Systems" " Recently, there has been a burst in the number of research projects on human computation via crowdsourcing. Multiple choice (or labeling) questions could be referred to as a common type of problem which is solved by this approach. As an application, crowd labeling is applied to find true labels for large machine learning datasets. Since crowds are not necessarily experts, the labels they provide are rather noisy and erroneous. This challenge is usually resolved by collecting multiple labels for each sample, and then aggregating them to estimate the true label. Although the mechanism leads to high-quality labels, it is not actually cost-effective. As a result, efforts are currently made to maximize the accuracy in estimating true labels, while fixing the number of acquired labels. This paper surveys methods to aggregate redundant crowd labels in order to estimate unknown true labels. It presents a unified statistical latent model where the differences among popular methods in the field correspond to different choices for the parameters of the model. 
Afterwards, algorithms to make inference on these models will be surveyed. Moreover, adaptive methods which iteratively collect labels based on the previously collected labels and estimated models will be discussed. In addition, this paper compares the distinguished methods, and provides guidelines for future work required to address the current open issues. ",Crowd Labeling: a survey " In this paper we present a propositional logic programming language for reasoning under possibilistic uncertainty and representing vague knowledge. Formulas are represented by pairs (A, c), where A is a many-valued proposition and c is value in the unit interval [0,1] which denotes a lower bound on the belief on A in terms of necessity measures. Belief states are modeled by possibility distributions on the set of all many-valued interpretations. In this framework, (i) we define a syntax and a semantics of the general underlying uncertainty logic; (ii) we provide a modus ponens-style calculus for a sublanguage of Horn-rules and we prove that it is complete for determining the maximum degree of possibilistic belief with which a fuzzy propositional variable can be entailed from a set of formulas; and finally, (iii) we show how the computation of a partial matching between fuzzy propositional variables, in terms of necessity measures for fuzzy sets, can be included in our logic programming system. ","A Complete Calculus for Possibilistic Logic Programming with Fuzzy Propositional Variables" " We show that if a strictly positive joint probability distribution for a set of binary random variables factors according to a tree, then vertex separation represents all and only the independence relations enclosed in the distribution. The same result is shown to hold also for multivariate strictly positive normal distributions. Our proof uses a new property of conditional independence that holds for these two classes of probability distributions. ",Perfect Tree-Like Markovian Distributions " Possibilistic logic offers a qualitative framework for representing pieces of information associated with levels of uncertainty of priority. The fusion of multiple sources information is discussed in this setting. Different classes of merging operators are considered including conjunctive, disjunctive, reinforcement, adaptive and averaging operators. Then we propose to analyse these classes in terms of postulates. This is done by first extending the postulate for merging classical bases to the case where priorites are avaialbe. ",A Principled Analysis of Merging Operations in Possibilistic Logic " Planning for distributed agents with partial state information is considered from a decision- theoretic perspective. We describe generalizations of both the MDP and POMDP models that allow for decentralized control. For even a small number of agents, the finite-horizon problems corresponding to both of our models are complete for nondeterministic exponential time. These complexity results illustrate a fundamental difference between centralized and decentralized control of Markov processes. In contrast to the MDP and POMDP problems, the problems we consider provably do not admit polynomial-time algorithms and most likely require doubly exponential time to solve in the worst case. We have thus provided mathematical evidence corresponding to the intuition that decentralized planning problems cannot easily be reduced to centralized problems and solved exactly using established techniques. 
",The Complexity of Decentralized Control of Markov Decision Processes " Monitoring plan preconditions can allow for replanning when a precondition fails, generally far in advance of the point in the plan where the precondition is relevant. However, monitoring is generally costly, and some precondition failures have a very small impact on plan quality. We formulate a model for optimal precondition monitoring, using partially-observable Markov decisions processes, and describe methods for solving this model efficitively, though approximately. Specifically, we show that the single-precondition monitoring problem is generally tractable, and the multiple-precondition monitoring policies can be efficitively approximated using single-precondition soultions. ",Approximately Optimal Monitoring of Plan Preconditions " Monte Carlo sampling has become a major vehicle for approximate inference in Bayesian networks. In this paper, we investigate a family of related simulation approaches, known collectively as quasi-Monte Carlo methods based on deterministic low-discrepancy sequences. We first outline several theoretical aspects of deterministic low-discrepancy sequences, show three examples of such sequences, and then discuss practical issues related to applying them to belief updating in Bayesian networks. We propose an algorithm for selecting direction numbers for Sobol sequence. Our experimental results show that low-discrepancy sequences (especially Sobol sequence) significantly improve the performance of simulation algorithms in Bayesian networks compared to Monte Carlo sampling. ","Computational Investigation of Low-Discrepancy Sequences in Simulation Algorithms for Bayesian Networks" " A simple advertising strategy that can be used to help increase sales of a product is to mail out special offers to selected potential customers. Because there is a cost associated with sending each offer, the optimal mailing strategy depends on both the benefit obtained from a purchase and how the offer affects the buying behavior of the customers. In this paper, we describe two methods for partitioning the potential customers into groups, and show how to perform a simple cost-benefit analysis to decide which, if any, of the groups should be targeted. In particular, we consider two decision-tree learning algorithms. The first is an ""off the shelf"" algorithm used to model the probability that groups of customers will buy the product. The second is a new algorithm that is similar to the first, except that for each group, it explicitly models the probability of purchase under the two mailing scenarios: (1) the mail is sent to members of that group and (2) the mail is not sent to members of that group. Using data from a real-world advertising experiment, we compare the algorithms to each other and to a naive mail-to-all strategy. ",A Decision Theoretic Approach to Targeted Advertising " This paper describes a Bayesian method for learning causal networks using samples that were selected in a non-random manner from a population of interest. Examples of data obtained by non-random sampling include convenience samples and case-control data in which a fixed number of samples with and without some condition is collected; such data are not uncommon. The paper describes a method for combining data under selection with prior beliefs in order to derive a posterior probability for a model of the causal processes that are generating the data in the population of interest. 
The priors include beliefs about the nature of the non-random sampling procedure. Although exact application of the method would be computationally intractable for most realistic datasets, efficient special-case and approximation methods are discussed. Finally, the paper describes how to combine learning under selection with previous methods for learning from observational and experimental data that are obtained on random samples of the population of interest. The net result is a Bayesian methodology that supports causal modeling and discovery from a rich mixture of different types of data. ",A Bayesian Method for Causal Modeling and Discovery Under Selection " This paper analyzes independence concepts for sets of probability measures associated with directed acyclic graphs. The paper shows that epistemic independence and the standard Markov condition violate desirable separation properties. The adoption of a contraction condition leads to d-separation but still fails to guarantee a belief separation property. To overcome this unsatisfactory situation, a strong Markov condition is proposed, based on epistemic independence. The main result is that the strong Markov condition leads to strong independence and does enforce separation properties; this result implies that (1) separation properties of Bayesian networks do extend to epistemic independence and sets of probability measures, and (2) strong independence has a clear justification based on epistemic independence and the strong Markov condition. ",Separation Properties of Sets of Probability Measures " Algorithms for exact and approximate inference in stochastic logic programs (SLPs) are presented, based respectively, on variable elimination and importance sampling. We then show how SLPs can be used to represent prior distributions for machine learning, using (i) logic programs and (ii) Bayes net structures as examples. Drawing on existing work in statistics, we apply the Metropolis-Hasting algorithm to construct a Markov chain which samples from the posterior distribution. A Prolog implementation for this is described. We also discuss the possibility of constructing explicit representations of the posterior. ","Stochastic Logic Programs: Sampling, Inference and Applications" " We present a new approach for inference in Bayesian networks, which is mainly based on partial differentiation. According to this approach, one compiles a Bayesian network into a multivariate polynomial and then computes the partial derivatives of this polynomial with respect to each variable. We show that once such derivatives are made available, one can compute in constant-time answers to a large class of probabilistic queries, which are central to classical inference, parameter estimation, model validation and sensitivity analysis. We present a number of complexity results relating to the compilation of such polynomials and to the computation of their partial derivatives. We argue that the combined simplicity, comprehensiveness and computational complexity of the presented framework is unique among existing frameworks for inference in Bayesian networks. ",A Differential Approach to Inference in Bayesian Networks " We have recently introduced an any-space algorithm for exact inference in Bayesian networks, called Recursive Conditioning, RC, which allows one to trade space with time at increments of X-bytes, where X is the number of bytes needed to cache a floating point number. In this paper, we present three key extensions of RC. 
First, we modify the algorithm so it applies to more general factorization of probability distributions, including (but not limited to) Bayesian network factorizations. Second, we present a forgetting mechanism which reduces the space requirements of RC considerably and then compare such requirmenets with those of variable elimination on a number of realistic networks, showing orders of magnitude improvements in certain cases. Third, we present a version of RC for computing maximum a posteriori hypotheses (MAP), which turns out to be the first MAP algorithm allowing a smooth time-space tradeoff. A key advantage of presented MAP algorithm is that it does not have to start from scratch each time a new query is presented, but can reuse some of its computations across multiple queries, leading to significant savings in ceratain cases. ",Any-Space Probabilistic Inference " In this paper, we use evidence-specific value abstraction for speeding Bayesian networks inference. This is done by grouping variable values and treating the combined values as a single entity. As we show, such abstractions can exploit regularities in conditional probability distributions and also the specific values of observed variables. To formally justify value abstraction, we define the notion of safe value abstraction and devise inference algorithms that use it to reduce the cost of inference. Our procedure is particularly useful for learning complex networks with many hidden variables. In such cases, repeated likelihood computations are required for EM or other parameter optimization techniques. Since these computations are repeated with respect to the same evidence set, our methods can provide significant speedup to the learning procedure. We demonstrate the algorithm on genetic linkage problems where the use of value abstraction sometimes differentiates between a feasible and non-feasible solution. ",Likelihood Computations Using Value Abstractions " In this paper, we formulate a qualitative ""linear"" utility theory for lotteries in which uncertainty is expressed qualitatively using a Spohnian disbelief function. We argue that a rational decision maker facing an uncertain decision problem in which the uncertainty is expressed qualitatively should behave so as to maximize ""qualitative expected utility."" Our axiomatization of the qualitative utility is similar to the axiomatization developed by von Neumann and Morgenstern for probabilistic lotteries. We compare our results with other recent results in qualitative decision making. ","A Qualitative Linear Utility Theory for Spohn's Theory of Epistemic Beliefs" " Many intelligent user interfaces employ application and user models to determine the user's preferences, goals and likely future actions. Such models require application analysis, adaptation and expansion. Building and maintaining such models adds a substantial amount of time and labour to the application development cycle. We present a system that observes the interface of an unmodified application and records users' interactions with the application. From a history of such observations we build a coarse state space of observed interface states and actions between them. To refine the space, we hypothesize sub-states based upon the histories that led users to a given state. We evaluate the information gain of possible state splits, varying the length of the histories considered in such splits. In this way, we automatically produce a stochastic dynamic model of the application and of how it is used. 
To evaluate our approach, we present models derived from real-world application usage data. ",Building a Stochastic Dynamic Model of Application Use " We give an interpretation of the Maximum Entropy (MaxEnt) Principle in game-theoretic terms. Based on this interpretation, we make a formal distinction between different ways of applying Maximum Entropy distributions. MaxEnt has frequently been criticized on the grounds that it leads to highly representation dependent results. Our distinction allows us to avoid this problem in many cases. ",Maximum Entropy and the Glasses You Are Looking Through " We document a connection between constraint reasoning and probabilistic reasoning. We present an algorithm, called probabilistic arc consistency, which is both a generalization of a well known algorithm for arc consistency used in constraint reasoning, and a specialization of the belief updating algorithm for singly-connected networks. Our algorithm is exact for singly-connected constraint problems, but can work well as an approximation for arbitrary problems. We briefly discuss some empirical results, and related methods. ","Probabilistic Arc Consistency: A Connection between Constraint Reasoning and Probabilistic Reasoning" " Composition of low-dimensional distributions, whose foundations were laid in the paper published in the Proceedings of UAI'97 (Jirousek 1997), appeared to be an alternative apparatus to describe multidimensional probabilistic models. In contrast to Graphical Markov Models, which define multidimensional distributions in a declarative way, this approach is rather procedural. Ordering of low-dimensional distributions into a proper sequence fully defines the respective computational procedure; therefore, a study of different types of generating sequences is one of the central problems in this field. Thus, it appears that an important role is played by special sequences that are called perfect. Their main characterization theorems are presented in this paper. However, the main result of this paper is a solution to the problem of marginalization for general sequences. The main theorem describes a way to obtain a generating sequence that defines the model corresponding to the marginal of the distribution defined by an arbitrary generating sequence. From this theorem the reader can see to what extent these computations are local; i.e., the sequence consists of marginal distributions whose computation must be made by summing up over the values of the variable eliminated (the paper deals with finite models). ",Marginalization in Composed Probabilistic Models " To investigate the robustness of the output probabilities of a Bayesian network, a sensitivity analysis can be performed. A one-way sensitivity analysis establishes, for each of the probability parameters of a network, a function expressing a posterior marginal probability of interest in terms of the parameter. Current methods for computing the coefficients in such a function rely on a large number of network evaluations. In this paper, we present a method that requires just a single outward propagation in a junction tree for establishing the coefficients in the functions for all possible parameters; in addition, an inward propagation is required for processing evidence. Conversely, the method requires a single outward propagation for computing the coefficients in the functions expressing all possible posterior marginals in terms of a single parameter. 
We extend these results to an n-way sensitivity analysis in which sets of parameters are studied. ",Making Sensitivity Analysis Computationally Efficient " Many large MDPs can be represented compactly using a dynamic Bayesian network. Although the structure of the value function does not retain the structure of the process, recent work has shown that value functions in factored MDPs can often be approximated well using a decomposed value function: a linear combination of restricted basis functions, each of which refers only to a small subset of variables. An approximate value function for a particular policy can be computed using approximate dynamic programming, but this approach (and others) can only produce an approximation relative to a distance metric which is weighted by the stationary distribution of the current policy. This type of weighted projection is ill-suited to policy improvement. We present a new approach to value determination, that uses a simple closed-form computation to directly compute a least-squares decomposed approximation to the value function for any weights. We then use this value determination algorithm as a subroutine in a policy iteration process. We show that, under reasonable restrictions, the policies induced by a factored value function are compactly represented, and can be manipulated efficiently in a policy iteration process. We also present a method for computing error bounds for decomposed value functions using a variable-elimination algorithm for function optimization. The complexity of all of our algorithms depends on the factorization of system dynamics and of the approximate value function. ",Policy Iteration for Factored MDPs " We propose a framework for building graphical causal model that is based on the concept of causal mechanisms. Causal models are intuitive for human users and, more importantly, support the prediction of the effect of manipulation. We describe an implementation of the proposed framework as an interactive model construction module, ImaGeNIe, in SMILE (Structural Modeling, Inference, and Learning Engine) and in GeNIe (SMILE's Windows user interface). ",Causal Mechanism-based Model Construction " We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. We then show that for all general Bayesian networks, the sequential maximum entropy model coincides with the unique joint distribution. Moreover, we apply the new principle of sequential maximum entropy to interval Bayesian networks and more generally to credal networks. We especially show that this application is equivalent to a number of small local entropy maximizations. ",Credal Networks under Maximum Entropy " We propose a formal framework for intelligent systems which can reason about scientific domains, in particular about the carcinogenicity of chemicals, and we study its properties. Our framework is grounded in a philosophy of scientific enquiry and discourse, and uses a model of dialectical argumentation. 
The formalism enables representation of scientific uncertainty and conflict in a manner suitable for qualitative reasoning about the domain. ",Risk Agoras: Dialectical Argumentation for Scientific Reasoning " Many applications of intelligent systems require reasoning about the mental states of agents in the domain. We may want to reason about an agent's beliefs, including beliefs about other agents; we may also want to reason about an agent's preferences, and how his beliefs and preferences relate to his behavior. We define a probabilistic epistemic logic (PEL) in which belief statements are given a formal semantics, and provide an algorithm for asserting and querying PEL formulas in Bayesian networks. We then show how to reason about an agent's behavior by modeling his decision process as an influence diagram and assuming that he behaves rationally. PEL can then be used for reasoning from an agent's observed actions to conclusions about other aspects of the domain, including unobserved domain variables and the agent's mental states. ",Probabilistic Models for Agents' Beliefs and Decisions " This paper deals with the representation and solution of asymmetric Bayesian decision problems. We present a formal framework, termed asymmetric influence diagrams, that is based on the influence diagram and allows an efficient representation of asymmetric decision problems. As opposed to existing frameworks, the asymmetric influence diagram primarily encodes asymmetry at the qualitative level, and it can therefore be read directly from the model. We give an algorithm for solving asymmetric influence diagrams. The algorithm initially decomposes the asymmetric decision problem into a structure of symmetric subproblems organized as a tree. A solution to the decision problem can then be found by propagating from the leaves toward the root using existing evaluation methods to solve the subproblems. ",Representing and Solving Asymmetric Bayesian Decision Problems " When using Bayesian networks for modelling the behavior of man-made machinery, it usually happens that a large part of the model is deterministic. For such Bayesian networks, the deterministic part of the model can be represented as a Boolean function, and a central part of belief updating reduces to the task of calculating the number of satisfying configurations in a Boolean function. In this paper we explore how advances in the calculation of Boolean functions can be adopted for belief updating, in particular within the context of troubleshooting. We present experimental results indicating a substantial speed-up compared to traditional junction tree propagation. ","Using ROBDDs for Inference in Bayesian Networks with Troubleshooting as an Example" " We present a new approach to the solution of decision problems formulated as influence diagrams. The approach converts the influence diagram into a simpler structure, the Limited Memory Influence Diagram (LIMID), where only the requisite information for the computation of optimal policies is depicted. Because the requisite information is explicitly represented in the diagram, the evaluation procedure can take advantage of it. In this paper we show how to convert an influence diagram to a LIMID and describe the procedure for finding an optimal strategy. Our approach can yield significant savings of memory and computational time when compared to traditional methods. ",Evaluating Influence Diagrams using LIMIDs " Conversations abound with uncertainties of various kinds. 
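The ROBDD abstract above reduces part of belief updating to counting the satisfying configurations of a Boolean function. The sketch below counts models by naive Shannon expansion rather than with an actual ROBDD package; the toy troubleshooting formula is hypothetical.

```python
def count_models(formula, variables):
    """Count satisfying assignments of `formula` by Shannon expansion (naive enumeration).
    An ROBDD would perform the same count while sharing isomorphic subgraphs."""
    def rec(i, assignment):
        if i == len(variables):
            return 1 if formula(assignment) else 0
        return (rec(i + 1, {**assignment, variables[i]: False}) +
                rec(i + 1, {**assignment, variables[i]: True}))
    return rec(0, {})

# deterministic part of a hypothetical troubleshooting model:
# the device works iff it has power and either the fuse or the backup is ok
works = lambda a: a["power"] and (a["fuse_ok"] or a["backup_ok"])
print(count_models(works, ["power", "fuse_ok", "backup_ok"]))  # -> 3
```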
Treating conversation as inference and decision making under uncertainty, we propose a task-independent, multimodal architecture for supporting robust continuous spoken dialog called Quartet. We introduce four interdependent levels of analysis, and describe representations, inference procedures, and decision strategies for managing uncertainties within and between the levels. We highlight the approach by reviewing interactions between a user and two spoken dialog systems developed using the Quartet architecture: Presenter, a prototype system for navigating Microsoft PowerPoint presentations, and the Bayesian Receptionist, a prototype system for dealing with tasks typically handled by front desk receptionists at the Microsoft corporate campus. ",Conversation as Action Under Uncertainty " We consider the problem of belief-state monitoring for the purposes of implementing a policy for a partially-observable Markov decision process (POMDP), specifically how one might approximate the belief state. Other schemes for belief-state approximation (e.g., based on minimizing a measure such as KL-divergence between the true and estimated state) are not necessarily appropriate for POMDPs. Instead we propose a framework for analyzing value-directed approximation schemes, where approximation quality is determined by the expected error in utility rather than by the error in the belief state itself. We propose heuristic methods for finding good projection schemes for belief state estimation - exhibiting anytime characteristics - given a POMDP value function. We also describe several algorithms for constructing bounds on the error in decision quality (expected utility) associated with acting in accordance with a given belief state approximation. ",Value-Directed Belief State Approximation for POMDPs " Techniques for plan recognition under uncertainty require a stochastic model of the plan-generation process. We introduce Probabilistic State-Dependent Grammars (PSDGs) to represent an agent's plan-generation process. The PSDG language model extends probabilistic context-free grammars (PCFGs) by allowing production probabilities to depend on an explicit model of the planning agent's internal and external state. Given a PSDG description of the plan-generation process, we can then use inference algorithms that exploit the particular independence properties of the PSDG language to efficiently answer plan-recognition queries. The combination of the PSDG language model and inference algorithms extends the range of plan-recognition domains for which practical probabilistic inference is possible, as illustrated by applications in traffic monitoring and air combat. ",Probabilistic State-Dependent Grammars for Plan Recognition " Qualitative probabilistic networks have been designed for probabilistic reasoning in a qualitative way. Due to their coarse level of representation detail, qualitative probabilistic networks do not provide for resolving trade-offs and typically yield ambiguous results upon inference. We present an algorithm for computing more insightful results for unresolved trade-offs. The algorithm builds upon the idea of using pivots to zoom in on the trade-offs and identifying the information that would serve to resolve them. ",Pivotal Pruning of Trade-offs in QPNs " This paper describes a domain-specific knowledge acquisition tool for intelligent automated troubleshooters based on Bayesian networks. 
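The value-directed approximation abstract above takes exact belief-state monitoring as its starting point. A minimal sketch of that exact Bayes-filter update is given below; the transition and observation matrices and their numbers are hypothetical.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Exact Bayes-filter update of a POMDP belief state.

    b[s]        : current belief over states
    T[a][s, s'] : transition probability P(s' | s, a)
    O[a][s', o] : observation probability P(o | s', a)
    """
    predicted = b @ T[a]                  # P(s' | b, a)
    unnormalized = predicted * O[a][:, o]
    return unnormalized / unnormalized.sum()

# two hypothetical states, one action, two observations
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}
O = {0: np.array([[0.7, 0.3],
                  [0.4, 0.6]])}
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1, T=T, O=O))
```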
No Bayesian network knowledge is required to use the tool, and troubleshooting information can be specified as naturally and intuitively as possible. Probabilities can be specified in the direction that is most natural to the domain expert. Thus, the knowledge acquisition tool efficiently removes the traditional knowledge acquisition bottleneck of Bayesian networks. ",A Knowledge Acquisition Tool for Bayesian-Network Troubleshooters " In this paper, we present a heuristic operator which aims at simultaneously optimizing the orientations of all the edges in an intermediate Bayesian network structure during the search process. This is done by alternating between the space of directed acyclic graphs (DAGs) and the space of skeletons. The orientations found for the edges are based on a scoring function rather than on induced conditional independences. This operator can be used as an extension to commonly employed search strategies. It is evaluated in experiments with artificial and real-world data. ",On the Use of Skeletons when Learning in Bayesian Networks " This paper deals with the problem of estimating the probability that one event was a cause of another in a given scenario. Using structural-semantical definitions of the probabilities of necessary or sufficient causation (or both), we show how to optimally bound these quantities from data obtained in experimental and observational studies, making minimal assumptions concerning the data-generating process. In particular, we strengthen the results of Pearl (1999) by weakening the data-generation assumptions and deriving theoretically sharp bounds on the probabilities of causation. These results delineate precisely how empirical data can be used both in settling questions of attribution and in solving attribution-related problems of decision making. ",Probabilities of Causation: Bounds and Identification " Conditional independence and Markov properties are powerful tools allowing expression of multidimensional probability distributions by means of low-dimensional ones. As multidimensional possibilistic models have been studied for several years, the demand for analogous tools in possibility theory seems to be quite natural. This paper is intended to be a promotion of de Cooman's measure-theoretic approach to possibility theory, as this approach allows us to find analogies to many important results obtained in the probabilistic framework. First, we recall semi-graphoid properties of conditional possibilistic independence, parameterized by a continuous t-norm, and find sufficient conditions for a class of Archimedean t-norms to have the graphoid property. Then we introduce Markov properties and factorization of possibility distributions (again parameterized by a continuous t-norm) and find the relationships between them. These results are accompanied by a number of counterexamples, which show that the assumptions of specific theorems are substantial. ",Conditional Independence and Markov Properties in Possibility Theory " Algorithms for learning the conditional probabilities of Bayesian networks with hidden variables typically operate within a high-dimensional search space and yield only locally optimal solutions. One way of limiting the search space and avoiding local optima is to impose qualitative constraints that are based on background knowledge concerning the domain. We present a method for integrating formal statements of qualitative constraints into two learning algorithms, APN and EM. 
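For the probabilities-of-causation abstract above, a commonly cited special case (not the paper's general bounds) is the excess risk ratio, which under exogeneity lower-bounds the probability of necessity and becomes exact under the additional assumption of monotonicity; the numbers below are hypothetical.

```python
def pn_lower_bound(p_y_given_x, p_y_given_not_x):
    """Lower bound on the probability of necessity under exogeneity:
    PN >= (P(y|x) - P(y|x')) / P(y|x), clipped at 0.
    Under monotonicity (no prevention) this bound is tight."""
    return max(0.0, (p_y_given_x - p_y_given_not_x) / p_y_given_x)

# hypothetical study: 30% of exposed and 10% of unexposed units develop the outcome
print(pn_lower_bound(0.30, 0.10))  # -> 0.666...
```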
In our experiments with synthetic data, this method yielded networks that satisfied the constraints almost perfectly. The accuracy of the learned networks was consistently superior to that of corresponding networks learned without constraints. The exploitation of qualitative constraints therefore appears to be a promising way to increase both the interpretability and the accuracy of learned Bayesian networks with known structure. ","Exploiting Qualitative Knowledge in the Learning of Conditional Probabilities of Bayesian Networks" " Constraints that may be obtained by composition from simpler constraints are present, in some way or another, in almost every constraint program. The decomposition of such constraints is a standard technique for obtaining an adequate propagation algorithm from a combination of propagators designed for simpler constraints. The decomposition approach is appealing in several ways. Firstly, because creating a specific propagator for every constraint is clearly infeasible since the number of constraints is infinite. Secondly, because designing a propagation algorithm for complex constraints can be very challenging. Finally, reusing existing propagators allows us to reduce the size of the code to be developed and maintained. Traditionally, constraint solvers automatically decompose constraints into simpler ones using additional auxiliary variables and propagators, or expect the users to perform such decomposition themselves, eventually leading to the same propagation model. In this paper we explore views, an alternative way to create efficient propagators for such constraints in a modular, simple and correct way, which avoids the introduction of auxiliary variables and propagators. ",View-based propagation of decomposable constraints " Elicitation of probabilities is one of the most laborious tasks in building decision-theoretic models, and one that has so far received only moderate attention in decision-theoretic systems. We propose a set of user interface tools for graphical probabilistic models, focusing on two aspects of probability elicitation: (1) navigation through conditional probability tables and (2) interactive graphical assessment of discrete probability distributions. We propose two new graphical views that aid navigation in very large conditional probability tables: the CPTree (Conditional Probability Tree) and the SCPT (shrinkable Conditional Probability Table). Based on what is known about graphical presentation of quantitative data to humans, we offer several useful enhancements to the probability wheel and bar graph, including different chart styles and options that can be adapted to user preferences and needs. We present the results of a simple usability study that demonstrates the value of the proposed tools. ","User Interface Tools for Navigation in Conditional Probability Tables and Elicitation of Probabilities in Bayesian Networks" " This is the Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, which was held on Catalina Island, CA, August 14-18, 2012. 
","Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (2012)" " This is the Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, which was held in Acapulco, Mexico, August 7-10 2003 ","Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (2003)" " This is the Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, which was held in Seattle, WA, August 2-5 2001 ","Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (2001)" " This is the Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, which was held in Alberta, Canada, August 1-4 2002 ","Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (2002)" " Handwriting is one of the most important means of daily communication. Although the problem of handwriting recognition has been considered for more than 60 years there are still many open issues, especially in the task of unconstrained handwritten sentence recognition. This paper focuses on the automatic system that recognizes continuous English sentence through a mouse-based gestures in real-time based on Artificial Neural Network. The proposed Artificial Neural Network is trained using the traditional backpropagation algorithm for self supervised neural network which provides the system with great learning ability and thus has proven highly successful in training for feed-forward Artificial Neural Network. The designed algorithm is not only capable of translating discrete gesture moves, but also continuous gestures through the mouse. In this paper we are using the efficient neural network approach for recognizing English sentence drawn by mouse. This approach shows an efficient way of extracting the boundary of the English Sentence and specifies the area of the recognition English sentence where it has been drawn in an image and then used Artificial Neural Network to recognize the English sentence. The proposed approach English sentence recognition (ESR) system is designed and tested successfully. Experimental results show that the higher speed and accuracy were examined. ","English Sentence Recognition using Artificial Neural Network through Mouse-based Gestures" " This paper presents a knowledge-based detection of objects approach using the OWL ontology language, the Semantic Web Rule Language, and 3D processing built-ins aiming at combining geometrical analysis of 3D point clouds and specialist's knowledge. Here, we share our experience regarding the creation of 3D semantic facility model out of unorganized 3D point clouds. Thus, a knowledge-based detection approach of objects using the OWL ontology language is presented. This knowledge is used to define SWRL detection rules. In addition, the combination of 3D processing built-ins and topological Built-Ins in SWRL rules allows a more flexible and intelligent detection, and the annotation of objects contained in 3D point clouds. The created WiDOP prototype takes a set of 3D point clouds as input, and produces as output a populated ontology corresponding to an indexed scene visualized within VRML language. The context of the study is the detection of railway objects materialized within the Deutsche Bahn scene such as signals, technical cupboards, electric poles, etc. 
Thus, the resulting enriched and populated ontology, which contains the annotations of objects in the point clouds, is used to feed a GIS system or an IFC file for architecture purposes. ","Knowledge Base Approach for 3D Objects Detection in Point Clouds Using 3D Processing and Specialists Knowledge" " This paper presents a method to automatically compute topological relations using SWRL rules. The calculation of these rules is based on the definition of a Selective Nef Complexes (Nef Polyhedra) structure generated from a standard Polyhedron. The Selective Nef Complexes structure is a data model providing a set of binary Boolean operators such as Union, Difference, Intersection and Symmetric difference, and unary operators such as Interior, Closure and Boundary. In this work, these operators are used to compute topological relations between objects defined by the constraints of the 9 Intersection Model (9-IM) from Egenhofer. With the help of these constraints, we defined a procedure to compute the topological relations on Nef polyhedra. These topological relationships are Disjoint, Meets, Contains, Inside, Covers, CoveredBy, Equals and Overlaps, and are defined in a top-level ontology with specific semantic definitions on relations, such as Transitive, Symmetric, Asymmetric, Functional, Reflexive, and Irreflexive. The results of the computation of topological relationships are stored in an OWL-DL ontology, which afterwards allows inferring over these new relationships between objects. In addition, logic rules based on the Semantic Web Rule Language allow the definition of logic programs that define which topological relationships have to be computed on which kind of objects with specific attributes. For instance, a ""Building"" that overlaps a ""Railway"" is a ""RailStation"". ","From 9-IM Topological Operators to Qualitative Spatial Relations using 3D Selective Nef Complexes and Logic Rules for bodies" " Computer Poker's unique characteristics present a well-suited challenge for research in artificial intelligence. For that reason, and due to Poker's increase in popularity in Portugal since 2008, several members of LIACC have researched in this field. Several works were published as papers and master's theses, and more recently a member of LIACC engaged in research in this area as a Ph.D. thesis in order to develop a more extensive and in-depth work. This paper describes the existing research in LIACC about Computer Poker, with special emphasis on the completed master's theses and plans for future work. This paper presents a summary of the lab's work to the research community in order to encourage the exchange of ideas with other labs and individuals. LIACC hopes this will improve research in this area so as to reach the goal of creating an agent that surpasses the best human players. ",Computer Poker Research at LIACC " Medical diagnosis processes vary in the degree to which they attempt to deal with different complicating aspects of diagnosis such as the relative importance of symptoms, varied symptom patterns and the relations between diseases themselves. Based on decision theory, in the past many mathematical models such as crisp sets, probability distributions, fuzzy sets and intuitionistic fuzzy sets were developed to deal with complicating aspects of diagnosis. However, many such models fail to include important aspects of the expert decisions. Therefore, an effort was made by Pawlak to handle inconsistencies in data with the introduction of rough set theory. 
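As background for the rough-set abstracts in this block, the sketch below computes the standard lower and upper approximations of a target set under the indiscernibility relation induced by a set of attributes; the toy patient table and attribute names are hypothetical.

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """Lower and upper approximations of `target` under the indiscernibility
    relation induced by `attrs`.  `objects` maps object id -> attribute dict."""
    classes = defaultdict(set)
    for obj, row in objects.items():
        classes[tuple(row[a] for a in attrs)].add(obj)
    lower, upper = set(), set()
    for block in classes.values():
        if block <= target:      # equivalence class entirely inside the target
            lower |= block
        if block & target:       # equivalence class overlapping the target
            upper |= block
    return lower, upper

# hypothetical patient table
patients = {
    1: {"fever": "yes", "cough": "yes"},
    2: {"fever": "yes", "cough": "yes"},
    3: {"fever": "no",  "cough": "yes"},
    4: {"fever": "no",  "cough": "no"},
}
flu = {1, 3}
print(approximations(patients, ["fever", "cough"], flu))  # ({3}, {1, 2, 3})
```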
Though rough set theory has major advantages over the other methods, it generates too many rules, which creates difficulties in decision making. Therefore, it is essential to minimize the decision rules. In this paper, we use two processes, a pre-process and a post-process, to mine suitable rules and to explore the relationships among the attributes. In the pre-process we use rough set theory to mine suitable rules, whereas in the post-process we apply formal concept analysis to these rules to explore better knowledge and the most important factors affecting the decision making. ","A Framework for Intelligent Medical Diagnosis using Rough Set with Formal Concept Analysis" " The dependency graph is a data architecture that models all the dependencies between the different types of assets in the game. It depicts the dependency-based relationships between the assets of a game. For example, a player must construct an arsenal before he can build weapons. It is vital that the dependency graph of a game is designed logically to ensure a logical sequence of game play. However, a mere logical dependency graph is not sufficient in sustaining the players' enduring interests in a game, which brings the problem of game balancing into the picture. The issue of game balancing arises when the players do not feel they have a chance of winning the game against AI opponents who are more skillful in the game play. At the current state of research, the architecture of the dependency graph is monolithic for the players. The sequence of asset possession is always foreseeable because there is only a single dependency graph. Game balancing is impossible when the assets of AI players overwhelmingly outnumber those of human players. This paper proposes a parallel dependency graph architecture for the AI players and human players. Instead of having a single dependency graph, a parallel architecture is proposed where the dependency graph of the AI player is adjustable with that of the human player using a support dependency as a game balancing mechanism. This paper shows that the parallel dependency graph helps to improve game balancing. ",Developing Parallel Dependency Graph In Improving Game Balancing " The main prospective aim of modern research related to Artificial Intelligence is the creation of technical systems that implement the idea of Strong Intelligence. According to our point of view, the path to the development of such systems runs through research in the field related to perception. Here we formulate a model of the perception of the external world which may be used to describe the perceptual activity of intelligent beings. We consider a number of issues related to the development of the set of patterns which will be used by the intelligent system when interacting with the environment. The key idea of the presented perception model is the idea of subjective reality. The principle of the relativity of the perceived world is formulated. It is shown that this principle is an immediate consequence of the idea of subjective reality. In this paper we show how the methodology of subjective reality may be used for the creation of different types of Strong AI systems. ",Subjective Reality and Strong Artificial Intelligence " Diagnosis and prediction in some domains, like medical and industrial diagnosis, require a representation that combines uncertainty management and temporal reasoning. 
Based on the fact that in many cases there are few state changes in the temporal range of interest, we propose a novel representation called Temporal Nodes Bayesian Networks (TNBN). In a TNBN each node represents an event or state change of a variable, and an arc corresponds to a causal-temporal relationship. The temporal intervals can differ in number and size for each temporal node, so this allows multiple granularity. Our approach is contrasted with a dynamic Bayesian network for a simple medical example. An empirical evaluation is presented for a more complex problem, a subsystem of a fossil power plant, in which this approach is used for fault diagnosis and prediction with good results. ",A Temporal Bayesian Network for Diagnosis and Prediction " Possibilistic logic bases and possibilistic graphs are two different frameworks of interest for representing knowledge. The former stratifies the pieces of knowledge (expressed by logical formulas) according to their level of certainty, while the latter exhibits relationships between variables. The two types of representations are semantically equivalent when they lead to the same possibility distribution (which rank-orders the possible interpretations). A possibility distribution can be decomposed using a chain rule which may be based on two different kinds of conditioning which exist in possibility theory (one based on product in a numerical setting, one based on the minimum operation in a qualitative setting). These two types of conditioning induce two kinds of possibilistic graphs. In both cases, a translation of these graphs into possibilistic bases is provided. The converse translation from a possibilistic knowledge base into a min-based graph is also described. ",Possibilistic logic bases and possibilistic graphs " Our hypothesis is that by equipping certain agents in a multi-agent system controlling an intelligent building with automated decision support, two important factors will be increased. The first is energy saving in the building. The second is customer value: how the people in the building experience the effects of the actions of the agents. We give evidence for the truth of this hypothesis through experimental findings related to tools for artificial decision making. A number of assumptions related to agent control, through monitoring and delegation of tasks to other kinds of agents, of rooms at a test site are relaxed. Each assumption controls at least one uncertainty that considerably complicates the procedures for selecting actions that are part of each such agent. We show that in realistic decision situations, room-controlling agents can make bounded rational decisions even under dynamic real-time constraints. This result can be, and has been, generalized to other domains with even harsher time constraints. ",Artificial Decision Making Under Uncertainty in Intelligent Buildings " In many domains it is desirable to assess the preferences of users in a qualitative rather than quantitative way. Such representations of qualitative preference orderings form an important component of automated decision tools. We propose a graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably natural. 
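The possibilistic logic abstract above contrasts the two kinds of conditioning in possibility theory. A minimal sketch of both definitions, applied to a hypothetical joint possibility table, is given below.

```python
def condition(joint, y_obs, mode="min"):
    """Conditional possibility pi(x | y_obs) from a joint possibility table joint[(x, y)].

    product-based (numerical setting):  pi(x | y) = pi(x, y) / Pi(y)
    min-based (qualitative setting):    pi(x | y) = 1 if pi(x, y) == Pi(y), else pi(x, y)
    """
    Pi_y = max(p for (x, y), p in joint.items() if y == y_obs)
    cond = {}
    for (x, y), p in joint.items():
        if y != y_obs:
            continue
        cond[x] = p / Pi_y if mode == "product" else (1.0 if p == Pi_y else p)
    return cond

# hypothetical joint possibility distribution over (X, Y)
joint = {("a", 0): 0.4, ("a", 1): 1.0, ("b", 0): 0.7, ("b", 1): 0.3}
print(condition(joint, 0, mode="min"))      # {'a': 0.4, 'b': 1.0}
print(condition(joint, 0, mode="product"))  # {'a': 0.571..., 'b': 1.0}
```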
We describe several search algorithms for dominance testing based on this representation; these algorithms are quite effective, especially in specific network topologies, such as chain- and tree-structured networks, as well as polytrees. ",Reasoning With Conditional Ceteris Paribus Preference Statements " This paper describes a Bayesian method for combining an arbitrary mixture of observational and experimental data in order to learn causal Bayesian networks. Observational data are passively observed. Experimental data, such as that produced by randomized controlled trials, result from the experimenter manipulating one or more variables (typically randomly) and observing the states of other variables. The paper presents a Bayesian method for learning the causal structure and parameters of the underlying causal process that is generating the data, given that (1) the data contains a mixture of observational and experimental case records, and (2) the causal process is modeled as a causal Bayesian network. This learning method was applied using as input various mixtures of experimental and observational data that were generated from the ALARM causal Bayesian network. In these experiments, the absolute and relative quantities of experimental and observational data were varied systematically. For each of these training datasets, the learning method was applied to predict the causal structure and to estimate the causal parameters that exist among randomly selected pairs of nodes in ALARM that are not confounded. The paper reports how these structure predictions and parameter estimates compare with the true causal structures and parameters as given by the ALARM network. ",Causal Discovery from a Mixture of Experimental and Observational Data " Recent work on loglinear models in probabilistic constraint logic programming is applied to first-order probabilistic reasoning. Probabilities are defined directly on the proofs of atomic formulae, and by marginalisation on the atomic formulae themselves. We use Stochastic Logic Programs (SLPs) composed of labelled and unlabelled definite clauses to define the proof probabilities. We have a conservative extension of first-order reasoning, so that, for example, there is a one-one mapping between logical and random variables. We show how, in this framework, Inductive Logic Programming (ILP) can be used to induce the features of a loglinear model from data. We also compare the presented framework with other approaches to first-order probabilistic reasoning. ",Loglinear models for first-order probabilistic reasoning " We present a hybrid constraint-based/Bayesian algorithm for learning causal networks in the presence of sparse data. The algorithm searches the space of equivalence classes of models (essential graphs) using a heuristic based on conventional constraint-based techniques. Each essential graph is then converted into a directed acyclic graph and scored using a Bayesian scoring metric. Two variants of the algorithm are developed and tested using data from randomly generated networks of sizes from 15 to 45 nodes with data sizes ranging from 250 to 2000 records. Both variations are compared to, and found to consistently outperform, two variations of greedy search with restarts. ","A Hybrid Anytime Algorithm for the Construction of Causal Models From Sparse Data" " Hybrid Probabilistic Programs (HPPs) are logic programs that allow the programmer to explicitly encode his knowledge of the dependencies between events being described in the program. 
In this paper, we classify HPPs into three classes called HPP_1, HPP_2 and HPP_r, r >= 3. For these classes, we provide three types of results for HPPs. First, we develop algorithms to compute the set of all ground consequences of an HPP. Then we provide algorithms and complexity results for the problems of entailment (""Given an HPP P and a query Q as input, is Q a logical consequence of P?"") and consistency (""Given an HPP P as input, is P consistent?""). Our results provide a fine characterization of when polynomial algorithms exist for the above problems, and when these problems become intractable. ",Hybrid Probabilistic Programs: Algorithms and Complexity " The problem of assessing the value of a candidate is viewed here as a multiple combination problem. On the one hand a candidate can be evaluated according to different criteria, and on the other hand several experts are supposed to assess the value of candidates according to each criterion. Criteria are not equally important, experts are not equally competent or reliable. Moreover, levels of satisfaction of criteria, or levels of confidence, are only assumed to take their values in qualitative scales which are just linearly ordered. The problem is discussed within two frameworks, the transferable belief model and qualitative possibility theory. They respectively offer a quantitative and a qualitative setting for handling the problem, providing thus a way to compare the nature of the underlying assumptions. ",Assessing the value of a candidate. Comparing belief function and possibility theories " This paper investigates a purely qualitative version of Savage's theory for decision making under uncertainty. Until now, most representation theorems for preference over acts rely on a numerical representation of utility and uncertainty where utility and uncertainty are commensurate. Disrupting the tradition, we relax this assumption and introduce a purely ordinal axiom requiring that the Decision Maker's (DM) preference between two acts only depends on the relative position of their consequences for each state. Within this qualitative framework, we determine the only possible form of the decision rule and investigate some instances compatible with the transitivity of the strict preference. Finally we propose a mild relaxation of our ordinality axiom, leaving room for a new family of qualitative decision rules compatible with transitivity. ",Qualitative Models for Decision Under Uncertainty without the Commensurability Assumption " In this paper, we analyze the relationship between probability and Spohn's theory for representation of uncertain beliefs. Using the intuitive idea that the more probable a proposition is, the more believable it is, we study transformations from probability to Spohnian disbelief and vice-versa. The transformations described in this paper are different from those described in the literature. In particular, the former satisfies the principles of ordinal congruence while the latter does not. Such transformations between probability and Spohn's calculi can contribute to (1) a clarification of the semantics of nonprobabilistic degrees of uncertain belief, and (2) a construction of a decision theory for such calculi. In practice, the transformations will allow a meaningful combination of more than one calculus in different stages of using an expert system such as knowledge acquisition, inference, and interpretation of results. 
",On Transformations between Probability and Spohnian Disbelief Functions " We present a new abductive, probabilistic theory of plan recognition. This model differs from previous plan recognition theories in being centered around a model of plan execution: most previous methods have been based on plans as formal objects or on rules describing the recognition process. We show that our new model accounts for phenomena omitted from most previous plan recognition theories: notably the cumulative effect of a sequence of observations of partially-ordered, interleaved plans and the effect of context on plan adoption. The model also supports inferences about the evolution of plan execution in situations where another agent intervenes in plan execution. This facility provides support for using plan recognition to build systems that will intelligently assist a user. ",A New Model of Plan Recognition " Classical Decision Theory provides a normative framework for representing and reasoning about complex preferences. Straightforward application of this theory to automate decision making is difficult due to high elicitation cost. In response to this problem, researchers have recently developed a number of qualitative, logic-oriented approaches for representing and reasoning about references. While effectively addressing some expressiveness issues, these logics have not proven powerful enough for building practical automated decision making systems. In this paper we present a hybrid approach to preference elicitation and decision making that is grounded in classical multi-attribute utility theory, but can make effective use of the expressive power of qualitative approaches. Specifically, assuming a partially specified multilinear utility function, we show how comparative statements about classes of decision alternatives can be used to further constrain the utility function and thus identify sup-optimal alternatives. This work demonstrates that quantitative and qualitative approaches can be synergistically integrated to provide effective and flexible decision support. ",A Hybrid Approach to Reasoning with Partially Elicited Preference Models " A conceptual foundation for approximation of belief functions is proposed and investigated. It is based on the requirements of consistency and closeness. An optimal approximation is studied. Unfortunately, the computation of the optimal approximation turns out to be intractable. Hence, various heuristic methods are proposed and experimantally evaluated both in terms of their accuracy and in terms of the speed of computation. These methods are compared to the earlier proposed approximations of belief functions. ",Faithful Approximations of Belief Functions " Markov decisions processes (MDPs) are becoming increasing popular as models of decision theoretic planning. While traditional dynamic programming methods perform well for problems with small state spaces, structured methods are needed for large problems. We propose and examine a value iteration algorithm for MDPs that uses algebraic decision diagrams(ADDs) to represent value functions and policies. An MDP is represented using Bayesian networks and ADDs and dynamic programming is applied directly to these ADDs. We demonstrate our method on large MDPs (up to 63 million states) and show that significant gains can be had when compared to tree-structured representations (with up to a thirty-fold reduction in the number of nodes required to represent optimal value functions). 
",SPUDD: Stochastic Planning using Decision Diagrams " We outline a method to estimate the value of computation for a flexible algorithm using empirical data. To determine a reasonable trade-off between cost and value, we build an empirical model of the value obtained through computation, and apply this model to estimate the value of computation for quite different problems. In particular, we investigate this trade-off for the problem of constructing policies for decision problems represented as influence diagrams. We show how two features of our anytime algorithm provide reasonable estimates of the value of computation in this domain. ",Estimating the Value of Computation in Flexible Information Refinement " The paper is a second in a series of two papers evaluating the power of a new scheme that generates search heuristics mechanically. The heuristics are extracted from an approximation scheme called mini-bucket elimination that was recently introduced. The first paper introduced the idea and evaluated it within Branch-and-Bound search. In the current paper the idea is further extended and evaluated within Best-First search. The resulting algorithms are compared on coding and medical diagnosis problems, using varying strength of the mini-bucket heuristics. Our results demonstrate an effective search scheme that permits controlled tradeoff between preprocessing (for heuristic generation) and search. Best-first search is shown to outperform Branch-and-Bound, when supplied with good heuristics, and sufficient memory space. ",Mini-Bucket Heuristics for Improved Search " The clique tree algorithm is the standard method for doing inference in Bayesian networks. It works by manipulating clique potentials - distributions over the variables in a clique. While this approach works well for many networks, it is limited by the need to maintain an exact representation of the clique potentials. This paper presents a new unified approach that combines approximate inference and the clique tree algorithm, thereby circumventing this limitation. Many known approximate inference algorithms can be viewed as instances of this approach. The algorithm essentially does clique tree propagation, using approximate inference to estimate the densities in each clique. In many settings, the computation of the approximate clique potential can be done easily using statistical importance sampling. Iterations are used to gradually improve the quality of the estimation. ","A General Algorithm for Approximate Inference and its Application to Hybrid Bayes Nets" " Most fuzzy systems including fuzzy decision support and fuzzy control systems provide out-puts in the form of fuzzy sets that represent the inferred conclusions. Linguistic interpretation of such outputs often involves the use of linguistic approximation that assigns a linguistic label to a fuzzy set based on the predefined primary terms, linguistic modifiers and linguistic connectives. More generally, linguistic approximation can be formalized in the terms of the re-translation rules that correspond to the translation rules in ex-plicitation (e.g. simple, modifier, composite, quantification and qualification rules) in com-puting with words [Zadeh 1996]. However most existing methods of linguistic approximation use the simple, modifier and composite re-translation rules only. 
Although these methods can provide a sufficient approximation of simple fuzzy sets, the approximation of more complex ones that are typical in many practical applications of fuzzy systems may be less satisfactory. Therefore the question arises why other re-translation rules, corresponding to the translation rules in explicitation, should not also be used to advantage in linguistic approximation. In particular, linguistic quantification may be desirable in situations where the conclusions interpreted as quantified linguistic propositions can be more informative and natural. This paper presents some aspects of linguistic approximation in the context of the re-translation rules and proposes an approach to linguistic approximation with the use of quantification rules, i.e. quantified linguistic approximation. Two methods of the quantified linguistic approximation are considered with the use of linguistic quantifiers based on the concepts of the non-fuzzy and fuzzy cardinalities of fuzzy sets. A number of examples are provided to illustrate the proposed approach. ",On Quantified Linguistic Approximation " There is available an ever-increasing variety of procedures for managing uncertainty. These methods are discussed in the literature of artificial intelligence, as well as in the literature of philosophy of science. Heretofore these methods have been evaluated by intuition, discussion, and the general philosophical method of argument and counterexample. Almost any method of uncertainty management will have the property that in the long run it will deliver numbers approaching the relative frequency of the kinds of events at issue. To find a measure that will provide a meaningful evaluation of these treatments of uncertainty, we must look, not at the long run, but at the short or intermediate run. Our project attempts to develop such a measure in terms of short or intermediate length performance. We represent the effects of practical choices by the outcomes of bets offered to agents characterized by two uncertainty management approaches: the subjective Bayesian approach and the Classical confidence interval approach. Experimental evaluation suggests that the confidence interval approach can outperform the subjective approach in the relatively short run. ",Choosing Among Interpretations of Probability " We consider the problem of finding good finite-horizon policies for POMDPs under the expected reward metric. The policies considered are free finite-memory policies with limited memory; a policy is a mapping from the space of observation-memory pairs to the space of action-memory pairs (the policy updates the memory as it goes), and the number of possible memory states is a parameter of the input to the policy-finding algorithms. The algorithms considered here are preliminary implementations of three search heuristics: local search, simulated annealing, and genetic algorithms. We compare their outcomes to each other and to the optimal policies for each instance. We compare run times of each policy and of a dynamic programming algorithm for POMDPs developed by Hansen that iteratively improves a finite-state controller, the previous state of the art for finite-memory policies. The value of the best policy can only improve as the amount of memory increases, up to the amount needed for an optimal finite-memory policy. 
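Relating to the quantified linguistic approximation abstract above, the sketch below evaluates a quantified proposition using a relative sigma-count together with a piecewise-linear membership function for "most"; both the membership function and the data are hypothetical choices for illustration, not the paper's own methods.

```python
def truth_of_quantified_proposition(memberships, quantifier):
    """Truth value of 'Q of the elements are A', using the relative sigma-count."""
    relative_count = sum(memberships) / len(memberships)
    return quantifier(relative_count)

def most(r):
    """Hypothetical piecewise-linear membership function for the quantifier 'most'."""
    if r <= 0.3:
        return 0.0
    if r >= 0.8:
        return 1.0
    return (r - 0.3) / 0.5

# memberships of five objects in a fuzzy set such as 'high quality' (hypothetical values)
high_quality = [0.9, 0.7, 0.8, 0.4, 1.0]
print(truth_of_quantified_proposition(high_quality, most))  # -> 0.92
```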
Our most surprising finding is that more memory helps in another way: given more memory than is needed for an optimal policy, the algorithms are more likely to converge to optimal-valued policies. ",My Brain is Full: When More Memory Helps " Solving symmetric Bayesian decision problems is a computationally intensive task to perform regardless of the algorithm used. In this paper we propose a method for improving the efficiency of algorithms for solving Bayesian decision problems. The method is based on the principle of lazy evaluation - a principle recently shown to improve the efficiency of inference in Bayesian networks. The basic idea is to maintain decompositions of potentials and to postpone computations for as long as possible. The efficiency improvements obtained with the lazy evaluation based method are emphasized through examples. Finally, the lazy evaluation based method is compared with the HUGIN and valuation-based systems architectures for solving symmetric Bayesian decision problems. ",Lazy Evaluation of Symmetric Bayesian Decision Problems " This paper extends previous work with network fragments and situation-specific network construction. We formally define the asymmetry network, an alternative representation for a conditional probability table. We also present an object-oriented representation for partially specified asymmetry networks. We show that the representation is parsimonious. We define an algebra for the elements of the representation that allows us to 'factor' any CPT and to soundly combine the partially specified asymmetry networks. ",Representing and Combining Partially Specified CPTs " Decision-making problems in uncertain or stochastic domains are often formulated as Markov decision processes (MDPs). Policy iteration (PI) is a popular algorithm for searching over policy-space, the size of which is exponential in the number of states. We are interested in bounds on the complexity of PI that do not depend on the value of the discount factor. In this paper we prove the first such non-trivial, worst-case, upper bounds on the number of iterations required by PI to converge to the optimal policy. Our analysis also sheds new light on the manner in which PI progresses through the space of policies. ",On the Complexity of Policy Iteration " We are interested in the problem of planning for factored POMDPs. Building on the recent results of Kearns, Mansour and Ng, we provide a planning algorithm for factored POMDPs that exploits the accuracy-efficiency tradeoff in the belief state simplification introduced by Boyen and Koller. ","Approximate Planning for Factored POMDPs using Belief State Simplification" " Solving partially observable Markov decision processes (POMDPs) is highly intractable in general, at least in part because the optimal policy may be infinitely large. In this paper, we explore the problem of finding the optimal policy from a restricted set of policies, represented as finite state automata of a given size. This problem is also intractable, but we show that the complexity can be greatly reduced when the POMDP and/or policy are further constrained. We demonstrate good empirical results with a branch-and-bound method for finding globally optimal deterministic policies, and a gradient-ascent method for finding locally optimal stochastic policies. ",Solving POMDPs by Searching the Space of Finite Policies " Influence diagrams serve as a powerful tool for modelling symmetric decision problems. 
When solving an influence diagram we determine a set of strategies for the decisions involved. A strategy for a decision variable is in principle a function over its past. However, some of the past may be irrelevant for the decision, and for computational reasons it is important not to deal with redundant variables in the strategies. We show that current methods (e.g., the ""Decision Bayes-ball"" algorithm by Shachter, UAI98) do not determine the relevant past, and we present a complete algorithm. Actually, this paper takes a more general outset: When formulating a decision scenario as an influence diagram, a linear temporal ordering of the decision variables is required. This constraint ensures that the decision scenario is welldefined. However, the structure of a decision scenario often yields certain decisions conditionally independent, and it is therefore unnecessary to impose a linear temporal ordering on the decisions. In this paper we deal with partial influence diagrams, i.e., influence diagrams with only a partial temporal ordering specified. We present a set of conditions which are necessary and sufficient to ensure that a partial influence diagram is welldefined. These conditions are used as a basis for the construction of an algorithm for determining whether or not a partial influence diagram is welldefined. ",Welldefined Decision Scenarios " Graphical models based on conditional independence support concise encodings of the subjective belief of a single agent. A natural question is whether the consensus belief of a group of agents can be represented with equal parsimony. We prove, under relatively mild assumptions, that even if everyone agrees on a common graph topology, no method of combining beliefs can maintain that structure. Even weaker conditions rule out local aggregation within conditional probability tables. On a more positive note, we show that if probabilities are combined with the logarithmic opinion pool (LogOP), then commonly held Markov independencies are maintained. This suggests a straightforward procedure for constructing a consensus Markov network. We describe an algorithm for computing the LogOP with time complexity comparable to that of exact Bayesian inference. ",Graphical Representations of Consensus Belief " In previous work, we pointed out the limitations of standard Bayesian networks as a modeling framework for large, complex domains. We proposed a new, richly structured modeling language, Object-oriented Bayesian Networks, that we argued would be able to deal with such domains. However, it turns out that OOBNs are not expressive enough to model many interesting aspects of complex domains: the existence of specific named objects, arbitrary relations between objects, and uncertainty over domain structure. These aspects are crucial in real-world domains such as battlefield awareness. In this paper, we present SPOOK, an implemented system that addresses these limitations. SPOOK implements a more expressive language that allows it to represent the battlespace domain naturally and compactly. We present a new inference algorithm that utilizes the model structure in a fundamental way, and show empirically that it achieves orders of magnitude speedup over existing approaches. 
","SPOOK: A System for Probabilistic Object-Oriented Knowledge Representation" " Bayesian Networks (BN) provide robust probabilistic methods of reasoning under uncertainty, but despite their formal grounds are strictly based on the notion of conditional dependence, not much attention has been paid so far to their use in dependability analysis. The aim of this paper is to propose BN as a suitable tool for dependability analysis, by challenging the formalism with basic issues arising in dependability tasks. We will discuss how both modeling and analysis issues can be naturally dealt with by BN. Moreover, we will show how some limitations intrinsic to combinatorial dependability methods such as Fault Trees can be overcome using BN. This will be pursued through the study of a real-world example concerning the reliability analysis of a redundant digital Programmable Logic Controller (PLC) with majority voting 2:3 ","Bayesian Networks for Dependability Analysis: an Application to Digital Control Reliability" " Qualitative probabilistic networks have been introduced as qualitative abstractions of Bayesian belief networks. One of the major drawbacks of these qualitative networks is their coarse level of detail, which may lead to unresolved trade-offs during inference. We present an enhanced formalism for qualitative networks with a finer level of detail. An enhanced qualitative probabilistic network differs from a regular qualitative network in that it distinguishes between strong and weak influences. Enhanced qualitative probabilistic networks are purely qualitative in nature, as regular qualitative networks are, yet allow for efficiently resolving trade-offs during inference. ",Enhancing QPNs for Trade-off Resolution " In this article we propose a qualitative (ordinal) counterpart for the Partially Observable Markov Decision Processes model (POMDP) in which the uncertainty, as well as the preferences of the agent, are modeled by possibility distributions. This qualitative counterpart of the POMDP model relies on a possibilistic theory of decision under uncertainty, recently developed. One advantage of such a qualitative framework is its ability to escape from the classical obstacle of stochastic POMDPs, in which even with a finite state space, the obtained belief state space of the POMDP is infinite. Instead, in the possibilistic framework even if exponentially larger than the state space, the belief state space remains finite. ","A Possibilistic Model for Qualitative Sequential Decision Problems under Uncertainty in Partially Observable Environments" " One of the most useful sensitivity analysis techniques of decision analysis is the computation of value of information (or clairvoyance), the difference in value obtained by changing the decisions by which some of the uncertainties are observed. In this paper, some simple but powerful extensions to previous algorithms are introduced which allow an efficient value of information calculation on the rooted cluster tree (or strong junction tree) used to solve the original decision problem. ",Efficient Value of Information Computation " Hidden Markov models (HMMs) and partially observable Markov decision processes (POMDPs) form a useful tool for modeling dynamical systems. They are particularly useful for representing environments such as road networks and office buildings, which are typical for robot navigation and planning. The work presented here is concerned with acquiring such models. 
We demonstrate how domain-specific information and constraints can be incorporated into the statistical estimation process, greatly improving the learned models in terms of the model quality, the number of iterations required for convergence and robustness to reduction in the amount of available data. We present new initialization heuristics which can be used even when the data suffers from cumulative rotational error, new update rules for the model parameters, as an instance of generalized EM, and a strategy for enforcing complete geometrical consistency in the model. Experimental results demonstrate the effectiveness of our approach for both simulated and real robot data, in traditionally hard-to-learn environments. ",Learning Hidden Markov Models with Geometrical Constraints " We present examples where the use of belief functions provided sound and elegant solutions to real life problems. These are essentially characterized by 'missing' information. The examples deal with 1) discriminant analysis using a learning set where classes are only partially known; 2) an information retrieval system handling inter-document relationships; 3) the combination of data from sensors competent on partially overlapping frames; 4) the determination of the number of sources in a multi-sensor environment by studying the inter-sensor contradiction. The purpose of the paper is to report on such applications where the use of belief functions provides a convenient tool to handle 'messy' data problems. ",Practical Uses of Belief Functions " The noisy-or and its generalization noisy-max have been utilized to reduce the complexity of knowledge acquisition. In this paper, we present a new representation of noisy-max that allows for efficient inference in general Bayesian networks. Empirical studies show that our method is capable of computing queries in well-known large medical networks, QMR-DT and CPCS, for which no previous exact inference method has been shown to perform well. ",Multiplicative Factorization of Noisy-Max " Structure and parameters in a Bayesian network uniquely specify the probability distribution of the modeled domain. The locality of both structure and probabilistic information is one of the great benefits of Bayesian networks and requires the modeler to specify only local information. On the other hand, this locality of information might prevent the modeler (and even more so any other person) from obtaining a general overview of the important relationships within the domain. The goal of the work presented in this paper is to provide an ""alternative"" view on the knowledge encoded in a Bayesian network which might sometimes be very helpful for providing insights into the underlying domain. The basic idea is to calculate a mixture approximation to the probability distribution represented by the Bayesian network. The mixture component densities can be thought of as representing typical scenarios implied by the Bayesian model, providing intuition about the basic relationships. As an additional benefit, performing inference in the approximate model is very simple and intuitive and can provide additional insights. The computational complexity for the calculation of the mixture approximations critically depends on the measure which defines the distance between the probability distribution represented by the Bayesian network and the approximate distribution. Both the KL-divergence and the backward KL-divergence lead to inefficient algorithms. 
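The noisy-MAX abstract above generalizes the noisy-OR model; a minimal noisy-OR computation of P(Y=1 | parents) is sketched below, with hypothetical link probabilities and leak.

```python
from itertools import product

def noisy_or_prob(active_links, leak=0.0):
    """P(Y = 1 | parents) under a noisy-OR model, where `active_links` are the
    link probabilities p_i of the parents that are currently on."""
    q = 1.0 - leak
    for p in active_links:
        q *= 1.0 - p
    return 1.0 - q

# three hypothetical parents with link probabilities 0.8, 0.5, 0.3 and a 0.1 leak
links = [0.8, 0.5, 0.3]
for bits in product([0, 1], repeat=len(links)):
    on = [p for p, b in zip(links, bits) if b]
    print(bits, round(noisy_or_prob(on, leak=0.1), 4))
```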
Incidentally, the latter is used in recent work on mixtures of mean field solutions to which the work presented here is closely related. We show, however, that using a mean squared error cost function leads to update equations which can be solved using the junction tree algorithm. We conclude that the mean squared error cost function can be used for Bayesian networks in which inference based on the junction tree is tractable. For large networks, however, one may have to rely on mean field approximations. ",Mixture Approximations to Bayesian Networks " In building Bayesian belief networks, the elicitation of all probabilities required can be a major obstacle. We learned the extent of this often-cited observation in the construction of the probabilistic part of a complex influence diagram in the field of cancer treatment. Based upon our negative experiences with existing methods, we designed a new method for probability elicitation from domain experts. The method combines various ideas, among which are the ideas of transcribing probabilities and of using a scale with both numerical and verbal anchors for marking assessments. In the construction of the probabilistic part of our influence diagram, the method proved to allow for the elicitation of many probabilities in little time. ",How to Elicit Many Probabilities " The AGM theory of belief revision has become an important paradigm for investigating rational belief changes. Unfortunately, researchers working in this paradigm have restricted much of their attention to rather simple representations of belief states, namely logically closed sets of propositional sentences. In our opinion, this has resulted in an overly abstract categorisation of belief change operations: expansion, revision, or contraction. Occasionally, probabilistic belief changes have also been considered in the AGM paradigm, and it is widely accepted that the probabilistic version of expansion is conditioning. However, we argue that it may be more correct to view conditioning and expansion as two essentially different kinds of belief change, and that what we call constraining is a better candidate for being considered probabilistic expansion. ","Probabilistic Belief Change: Expansion, Conditioning and Constraining" " It is well-known that the notion of (strong) conditional independence (CI) is too restrictive to capture independencies that only hold in certain contexts. This kind of contextual independency, called context-strong independence (CSI), can be used to facilitate the acquisition, representation, and inference of probabilistic knowledge. In this paper, we suggest the use of contextual weak independence (CWI) in Bayesian networks. It should be emphasized that the notion of CWI is a more general form of contextual independence than CSI. Furthermore, if the contextual strong independence holds for all contexts, then the notion of CSI becomes strong CI. On the other hand, if the weak contextual independence holds for all contexts, then the notion of CWI becomes weak independence (WI), which is a more general noncontextual independency than strong CI. More importantly, complete axiomatizations are studied for both the class of WI and the class of CI and WI together. Finally, the interesting property of WI being a necessary and sufficient condition for ensuring consistency in granular probabilistic networks is shown. 
",Contextual Weak Independence in Bayesian Networks " As Bayesian networks are applied to larger and more complex problem domains, search for flexible modeling and more efficient inference methods is an ongoing effort. Multiply sectioned Bayesian networks (MSBNs) extend the HUGIN inference for Bayesian networks into a coherent framework for flexible modeling and distributed inference.Lazy propagation extends the Shafer-Shenoy and HUGIN inference methods with reduced space complexity. We apply the Shafer-Shenoy and lazy propagation to inference in MSBNs. The combination of the MSBN framework and lazy propagation provides a better framework for modeling and inference in very large domains. It retains the modeling flexibility of MSBNs and reduces the runtime space complexity, allowing exact inference in much larger domains given the same computational resources. ","Inference in Multiply Sectioned Bayesian Networks with Extended Shafer-Shenoy and Lazy Propagation" " Recent interests in dynamic decision modeling have led to the development of several representation and inference methods. These methods however, have limited application under time critical conditions where a trade-off between model quality and computational tractability is essential. This paper presents an approach to time-critical dynamic decision modeling. A knowledge representation and modeling method called the time-critical dynamic influence diagram is proposed. The formalism has two forms. The condensed form is used for modeling and model abstraction, while the deployed form which can be converted from the condensed form is used for inference purposes. The proposed approach has the ability to represent space-temporal abstraction within the model. A knowledge-based meta-reasoning approach is proposed for the purpose of selecting the best abstracted model that provide the optimal trade-off between model quality and model tractability. An outline of the knowledge-based model construction algorithm is also provided. ",Time-Critical Dynamic Decision Making " We present a technique for speeding up the convergence of value iteration for partially observable Markov decisions processes (POMDPs). The underlying idea is similar to that behind modified policy iteration for fully observable Markov decision processes (MDPs). The technique can be easily incorporated into any existing POMDP value iteration algorithms. Experiments have been conducted on several test problems with one POMDP value iteration algorithm called incremental pruning. We find that the technique can make incremental pruning run several orders of magnitude faster. ","A Method for Speeding Up Value Iteration in Partially Observable Markov Decision Processes" " The notion of rough set captures indiscernibility of elements in a set. But, in many real life situations, an information system establishes the relation between different universes. This gave the extension of rough set on single universal set to rough set on two universal sets. In this paper, we introduce approximation of classifications and measures of uncertainty basing upon rough set on two universal sets employing the knowledge due to binary relations. ","Approximation of Classification and Measures of Uncertainty in Rough Set on Two Universal Sets" " Possibilistic logic is a well-known graded logic of uncertainty suitable to reason under incomplete information and partially inconsistent knowledge, which is built upon classical first order logic. 
There exists for Possibilistic logic a proof procedure based on a refutation complete resolution-style calculus. Recently, a syntactical extension of first order Possibilistic logic (called PLFC) dealing with fuzzy constants and fuzzily restricted quantifiers has been proposed. Our aim is to present steps towards both the formalization of PLFC itself and an automated deduction system for it by (i) providing a formal semantics; (ii) defining a sound resolution-style calculus by refutation; and (iii) describing a first-order proof procedure for PLFC clauses based on (ii) and on a novel notion of most general substitution of two literals in a resolution step. In contrast to standard Possibilistic logic semantics, truth-evaluation of formulas with fuzzy constants is many-valued instead of Boolean, and consequently an extended notion of possibilistic uncertainty is also needed. ","On the Semantics and Automated Deduction for PLFC, a Logic of Possibilistic Uncertainty and Fuzziness" " Argumentation is a promising model for reasoning with uncertain knowledge. The key concept of acceptability makes it possible to differentiate arguments and counterarguments: the certainty of a proposition can then be evaluated through the most acceptable arguments for that proposition. In this paper, we investigate two complementary points of view: an acceptability based on the existence of direct counterarguments, and an acceptability based on the existence of defenders. Pursuing previous work on preference-based argumentation principles, we enforce both points of view by taking into account preference orderings for comparing arguments. Our approach is illustrated in the context of reasoning with stratified knowledge bases. ",On the Acceptability of Arguments in Preference-Based Argumentation " This paper addresses the problem of merging uncertain information in the framework of possibilistic logic. It presents several syntactic combination rules to merge possibilistic knowledge bases, provided by different sources, into a new possibilistic knowledge base. These combination rules are first described at the meta-level outside the language of possibilistic logic. Next, an extension of possibilistic logic, where the combination rules are inside the language, is proposed. A proof system in a sequent form, which is sound and complete with respect to the possibilistic logic semantics, is given. ",Merging Uncertain Knowledge Bases in a Possibilistic Logic Framework " There exist two general forms of exact algorithms for updating probabilities in Bayesian Networks. The first approach involves using a structure, usually a clique tree, and performing local message-based calculation to extract the belief in each variable. The second general class of algorithms involves the use of non-serial dynamic programming techniques to extract the belief in some desired group of variables. In this paper we present a hybrid algorithm based on the latter approach yet possessing the ability to retrieve the belief in all single variables. The technique is advantageous in that it saves an NP-hard computation step over using one algorithm of each type. Furthermore, this technique reinforces a conjecture of Jensen and Jensen [JJ94] in that it still requires a single NP-hard step to set up the structure on which inference is performed, as we show by confirming Li and D'Ambrosio's [LD94] conjectured NP-hardness of OFP. 
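To fix ideas about the non-serial dynamic programming style of computation referred to above, here is a toy variable-elimination marginal on a three-node chain; the network and the numbers are invented purely for illustration.

```python
import numpy as np

# Toy chain A -> B -> C with made-up (conditional) probability tables.
pA   = np.array([0.6, 0.4])              # P(A)
pB_A = np.array([[0.9, 0.1],             # P(B | A), rows indexed by A
                 [0.3, 0.7]])
pC_B = np.array([[0.8, 0.2],             # P(C | B), rows indexed by B
                 [0.4, 0.6]])

# Eliminate A, then B, to obtain the marginal of C.
pB = np.einsum('a,ab->b', pA, pB_A)      # sum_a P(A) P(B|A)
pC = np.einsum('b,bc->c', pB, pC_B)      # sum_b P(B) P(C|B)
print(pC, pC.sum())                      # marginal P(C); the probabilities sum to 1
```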
","A Hybrid Algorithm to Compute Marginal and Joint Beliefs in Bayesian Networks and Its Complexity" " Recent research in decision theoretic planning has focussed on making the solution of Markov decision processes (MDPs) more feasible. We develop a family of algorithms for structured reachability analysis of MDPs that are suitable when an initial state (or set of states) is known. Using compact, structured representations of MDPs (e.g., Bayesian networks), our methods, which vary in the tradeoff between complexity and accuracy, produce structured descriptions of (estimated) reachable states that can be used to eliminate variables or variable values from the problem description, reducing the size of the MDP and making it easier to solve. One contribution of our work is the extension of ideas from GRAPHPLAN to deal with the distributed nature of action representations typically embodied within Bayes nets and the problem of correlated action effects. We also demonstrate that our algorithm can be made more complete by using k-ary constraints instead of binary constraints. Another contribution is the illustration of how the compact representation of reachability constraints can be exploited by several existing (exact and approximate) abstraction algorithms for MDPs. ",Structured Reachability Analysis for Markov Decision Processes " The monitoring and control of any dynamic system depends crucially on the ability to reason about its current status and its future trajectory. In the case of a stochastic system, these tasks typically involve the use of a belief state- a probability distribution over the state of the process at a given point in time. Unfortunately, the state spaces of complex processes are very large, making an explicit representation of a belief state intractable. Even in dynamic Bayesian networks (DBNs), where the process itself can be represented compactly, the representation of the belief state is intractable. We investigate the idea of maintaining a compact approximation to the true belief state, and analyze the conditions under which the errors due to the approximations taken over the lifetime of the process do not accumulate to make our answers completely irrelevant. We show that the error in a belief state contracts exponentially as the process evolves. Thus, even with multiple approximations, the error in our process remains bounded indefinitely. We show how the additional structure of a DBN can be used to design our approximation scheme, improving its performance significantly. We demonstrate the applicability of our ideas in the context of a monitoring task, showing that orders of magnitude faster inference can be achieved with only a small degradation in accuracy. ",Tractable Inference for Complex Stochastic Processes " The situation assessment problem is considered, in terms of object, condition, activity, and plan recognition, based on data coming from the real-word {em via} various sensors. It is shown that uncertainty issues are linked both to the models and to the matching algorithm. Three different types of uncertainties are identified, and within each one, the numerical and the symbolic cases are distinguished. The emphasis is then put on purely symbolic uncertainties: it is shown that they can be dealt with within a purely symbolic framework resulting from a transposition of classical numerical estimation tools. 
","Dealing with Uncertainty in Situation Assessment: towards a Symbolic Approach" " Given an undirected graph G or hypergraph X model for a given set of variables V, we introduce two marginalization operators for obtaining the undirected graph GA or hypergraph HA associated with a given subset A c V such that the marginal distribution of A factorizes according to GA or HA, respectively. Finally, we illustrate the method by its application to some practical examples. With them we show that hypergraph models allow defining a finer factorization or performing a more precise conditional independence analysis than undirected graph models. ",Marginalizing in Undirected Graph and Hypergraph Models " We investigate the application of classification techniques to utility elicitation. In a decision problem, two sets of parameters must generally be elicited: the probabilities and the utilities. While the prior and conditional probabilities in the model do not change from user to user, the utility models do. Thus it is necessary to elicit a utility model separately for each new user. Elicitation is long and tedious, particularly if the outcome space is large and not decomposable. There are two common approaches to utility function elicitation. The first is to base the determination of the users utility function solely ON elicitation OF qualitative preferences.The second makes assumptions about the form AND decomposability OF the utility function.Here we take a different approach: we attempt TO identify the new USERs utility function based on classification relative to a database of previously collected utility functions. We do this by identifying clusters of utility functions that minimize an appropriate distance measure. Having identified the clusters, we develop a classification scheme that requires many fewer and simpler assessments than full utility elicitation and is more robust than utility elicitation based solely on preferences. We have tested our algorithm on a small database of utility functions in a prenatal diagnosis domain and the results are quite promising. ",Utility Elicitation as a Classification Problem " This paper analyzes irrelevance and independence relations in graphical models associated with convex sets of probability distributions (called Quasi-Bayesian networks). The basic question in Quasi-Bayesian networks is, How can irrelevance/independence relations in Quasi-Bayesian networks be detected, enforced and exploited? This paper addresses these questions through Walley's definitions of irrelevance and independence. Novel algorithms and results are presented for inferences with the so-called natural extensions using fractional linear programming, and the properties of the so-called type-1 extensions are clarified through a new generalization of d-separation. ",Irrelevance and Independence Relations in Quasi-Bayesian Networks " It is well known that one can ignore parts of a belief network when computing answers to certain probabilistic queries. It is also well known that the ignorable parts (if any) depend on the specific query of interest and, therefore, may change as the query changes. Algorithms based on jointrees, however, do not seem to take computational advantage of these facts given that they typically construct jointrees for worst-case queries; that is, queries for which every part of the belief network is considered relevant. To address this limitation, we propose in this paper a method for reconfiguring jointrees dynamically as the query changes. 
The reconfiguration process aims at maintaining a jointree which corresponds to the underlying belief network after it has been pruned given the current query. Our reconfiguration method is marked by three characteristics: (a) it is based on a non-classical definition of jointrees; (b) it is relatively efficient; and (c) it can reuse some of the computations performed before a jointree is reconfigured. We present preliminary experimental results which demonstrate significant savings over using static jointrees when query changes are considerable. ",Dynamic Jointrees " The variability of structure in a finite Markov equivalence class of causally sufficient models represented by directed acyclic graphs has been fully characterized. Without causal sufficiency, an infinite semi-Markov equivalence class of models has only been characterized by the fact that each model in the equivalence class entails the same marginal statistical dependencies. In this paper, we study the variability of structure of causal models within a semi-Markov equivalence class and propose a systematic approach to construct models entailing any specific marginal statistical dependencies. ",On the Semi-Markov Equivalence of Causal Models " This paper relates comparative belief structures and a general view of belief management in the setting of deductively closed logical representations of accepted beliefs. We show that the range of compatibility between the classical deductive closure and uncertain reasoning covers precisely the nonmonotonic 'preferential' inference system of Kraus, Lehmann and Magidor and nothing else. In terms of uncertain reasoning any possibility or necessity measure gives birth to a structure of accepted beliefs. The classes of probability functions and of Shafer's belief functions which yield belief sets prove to be very special ones. ","Comparative Uncertainty, Belief Functions and Accepted Beliefs" " This paper presents an axiomatic framework for qualitative decision under uncertainty in a finite setting. The corresponding utility is expressed by a sup-min expression, called Sugeno (or fuzzy) integral. Technically speaking, Sugeno integral is a median, which is indeed a qualitative counterpart to the averaging operation underlying expected utility. The axiomatic justification of Sugeno integral-based utility is expressed in terms of preference between acts as in Savage decision theory. Pessimistic and optimistic qualitative utilities, based on necessity and possibility measures, previously introduced by two of the authors, can be retrieved in this setting by adding appropriate axioms. ",Qualitative Decision Theory with Sugeno Integrals " While decision theory provides an appealing normative framework for representing rich preference structures, eliciting utility or value functions typically incurs a large cost. For many applications involving interactive systems this overhead precludes the use of formal decision-theoretic models of preference. Instead of performing elicitation in a vacuum, it would be useful if we could augment directly elicited preferences with some appropriate default information. In this paper we propose a case-based approach to alleviating the preference elicitation bottleneck. Assuming the existence of a population of users from whom we have elicited complete or incomplete preference structures, we propose eliciting the preferences of a new user interactively and incrementally, using the closest existing preference structures as potential defaults. 
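One of the rank distances studied below, Spearman's footrule, simply sums the absolute rank displacement of each item between two rankings; a tiny sketch with invented preference orders:

```python
def footrule(rank_a, rank_b):
    # Spearman's footrule: sum of absolute rank displacements over all items.
    return sum(abs(rank_a[item] - rank_b[item]) for item in rank_a)

# Two hypothetical users ranking four outcomes (1 = most preferred).
user_1 = {'car': 1, 'bike': 2, 'bus': 3, 'walk': 4}
user_2 = {'car': 2, 'bike': 1, 'bus': 4, 'walk': 3}

print(footrule(user_1, user_2))  # |1-2| + |2-1| + |3-4| + |4-3| = 4
```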
Since a notion of closeness demands a measure of distance among preference structures, this paper takes the first step of studying various distance measures over fully and partially specified preference structures. We explore the use of Euclidean distance and Spearman's footrule, and we define a new measure, the probabilistic distance. We provide computational techniques for all three measures. ","Towards Case-Based Preference Elicitation: Similarity Measures on Preference Structures" " Most algorithms for solving POMDPs iteratively improve a value function that implicitly represents a policy and are said to search in value function space. This paper presents an approach to solving POMDPs that represents a policy explicitly as a finite-state controller and iteratively improves the controller by search in policy space. Two related algorithms illustrate this approach. The first is a policy iteration algorithm that can outperform value iteration in solving infinite-horizon POMDPs. It provides the foundation for a new heuristic search algorithm that promises further speedup by focusing computational effort on regions of the problem space that are reachable, or likely to be reached, from a start state. ",Solving POMDPs by Searching in Policy Space " We investigate the use of temporally abstract actions, or macro-actions, in the solution of Markov decision processes. Unlike current models that combine both primitive actions and macro-actions and leave the state space unchanged, we propose a hierarchical model (using an abstract MDP) that works with macro-actions only, and that significantly reduces the size of the state space. This is achieved by treating macro-actions as local policies that act in certain regions of state space, and by restricting states in the abstract MDP to those at the boundaries of regions. The abstract MDP approximates the original and can be solved more efficiently. We discuss several ways in which macro-actions can be generated to ensure good solution quality. Finally, we consider ways in which macro-actions can be reused to solve multiple, related MDPs; and we show that this can justify the computational overhead of macro-action generation. ",Hierarchical Solution of Markov Decision Processes using Macro-actions " Stochastic search algorithms are among the most successful approaches for solving hard combinatorial problems. A large class of stochastic search approaches can be cast into the framework of Las Vegas Algorithms (LVAs). As the run-time behavior of LVAs is characterized by random variables, the detailed knowledge of run-time distributions provides important information for the analysis of these algorithms. In this paper we propose a novel methodology for evaluating the performance of LVAs, based on the identification of empirical run-time distributions. We exemplify our approach by applying it to Stochastic Local Search (SLS) algorithms for the satisfiability problem (SAT) in propositional logic. We point out pitfalls arising from the use of improper empirical methods and discuss the benefits of the proposed methodology for evaluating and comparing LVAs. ",Evaluating Las Vegas Algorithms - Pitfalls and Remedies " We present an anytime algorithm which computes policies for decision problems represented as multi-stage influence diagrams. Our algorithm constructs policies incrementally, starting from a policy which makes no use of the available information. 
The incremental process constructs policies which include more of the information available to the decision maker at each step. While the process converges to the optimal policy, our approach is designed for situations in which computing the optimal policy is infeasible. We provide examples of the process on several large decision problems, showing that, for these examples, the process constructs valuable (but sub-optimal) policies before the optimal policy would be available by traditional methods. ",An Anytime Algorithm for Decision Making under Uncertainty " For many real-time applications, it is important to validate the information received from the sensors before entering higher levels of reasoning. This paper presents an any time probabilistic algorithm for validating the information provided by sensors. The system consists of two Bayesian network models. The first one is a model of the dependencies between sensors and it is used to validate each sensor. It provides a list of potentially faulty sensors. To isolate the real faults, a second Bayesian network is used, which relates the potential faults with the real faults. This second model is also used to make the validation algorithm any time, by validating first the sensors that provide more information. To select the next sensor to validate, and measure the quality of the results at each stage, an entropy function is used. This function captures in a single quantity both the certainty and specificity measures of any time algorithms. Together, both models constitute a mechanism for validating sensors in an any time fashion, providing at each step the probability of correct/faulty for each sensor, and the total quality of the results. The algorithm has been tested in the validation of temperature sensors of a power plant. ",Any Time Probabilistic Reasoning for Sensor Validation " We take another look at the general problem of selecting a preferred probability measure among those that comply with some given constraints. The dominant role that entropy maximization has obtained in this context is questioned by arguing that the minimum information principle on which it is based could be supplanted by an at least as plausible ""likelihood of evidence"" principle. We then review a method for turning given selection functions into representation independent variants, and discuss the tradeoffs involved in this transformation. ","Measure Selection: Notions of Rationality and Representation Independence" " In the literature on graphical models, there has been increased attention paid to the problems of learning hidden structure (see Heckerman [H96] for a survey) and causal mechanisms from sample data [H96, P88, S93, P95, F98]. In most settings we should expect the former to be difficult, and the latter potentially impossible without experimental intervention. In this work, we examine some restricted settings in which the hidden structure can be perfectly reconstructed solely on the basis of observed sample data. ","Exact Inference of Hidden Structure from Sample Data in Noisy-OR Networks" " In the last decade, several architectures have been proposed for exact computation of marginals using local computation. In this paper, we compare three architectures - Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer - from the perspective of graphical structure for message propagation, message-passing scheme, computational efficiency, and storage efficiency. 
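The computational core shared by the three architectures compared here is the clique-to-clique message: marginalise a clique potential onto the separator and multiply it into the neighbouring clique; a toy sketch with random potentials over invented binary variables:

```python
import numpy as np

rng = np.random.default_rng(0)
phi_ABC = rng.random((2, 2, 2))   # made-up potential over binary variables (A, B, C)
phi_BCD = rng.random((2, 2, 2))   # neighbouring clique over (B, C, D); separator is {B, C}

# Marginalise the sending clique onto the separator, then absorb the message.
msg_BC   = phi_ABC.sum(axis=0)                 # sum out A -> table over (B, C)
phi_BCD *= msg_BC[:, :, np.newaxis]            # broadcast the message over D
print(msg_BC.shape, phi_BCD.shape)             # (2, 2) (2, 2, 2)
```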
","A Comparison of Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer Architectures for Computing Marginals of Probability Distributions" " Qualitative probabilistic reasoning in a Bayesian network often reveals tradeoffs: relationships that are ambiguous due to competing qualitative influences. We present two techniques that combine qualitative and numeric probabilistic reasoning to resolve such tradeoffs, inferring the qualitative relationship between nodes in a Bayesian network. The first approach incrementally marginalizes nodes that contribute to the ambiguous qualitative relationships. The second approach evaluates approximate Bayesian networks for bounds of probability distributions, and uses these bounds to determinate qualitative relationships in question. This approach is also incremental in that the algorithm refines the state spaces of random variables for tighter bounds until the qualitative relationships are resolved. Both approaches provide systematic methods for tradeoff resolution at potentially lower computational cost than application of purely numeric methods. ",Incremental Tradeoff Resolution in Qualitative Probabilistic Networks " We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well. ",Using Qualitative Relationships for Bounding Probability Distributions " We present locally complete inference rules for probabilistic deduction from taxonomic and probabilistic knowledge-bases over conjunctive events. Crucially, in contrast to similar inference rules in the literature, our inference rules are locally complete for conjunctive events and under additional taxonomic knowledge. We discover that our inference rules are extremely complex and that it is at first glance not clear at all where the deduced tightest bounds come from. Moreover, analyzing the global completeness of our inference rules, we find examples of globally very incomplete probabilistic deductions. More generally, we even show that all systems of inference rules for taxonomic and probabilistic knowledge-bases over conjunctive events are globally incomplete. We conclude that probabilistic deduction by the iterative application of inference rules on interval restrictions for conditional probabilities, even though considered very promising in the literature so far, seems very limited in its field of application. ","Magic Inference Rules for Probabilistic Deduction under Taxonomic Knowledge" " The efficiency of algorithms using secondary structures for probabilistic inference in Bayesian networks can be improved by exploiting independence relations induced by evidence and the direction of the links in the original network. In this paper we present an algorithm that on-line exploits independence relations induced by evidence and the direction of the links in the original network to reduce both time and space costs. 
Instead of multiplying the conditional probability distributions for the various cliques, we determine on-line which potentials to multiply when a message is to be produced. The performance improvement of the algorithm is emphasized through empirical evaluations involving large real world Bayesian networks, and we compare the method with the HUGIN and Shafer-Shenoy inference algorithms. ",Lazy Propagation in Junction Trees " This paper describes a process for constructing situation-specific belief networks from a knowledge base of network fragments. A situation-specific network is a minimal query complete network constructed from a knowledge base in response to a query for the probability distribution on a set of target variables given evidence and context variables. We present definitions of query completeness and situation-specific networks. We describe conditions on the knowledge base that guarantee query completeness. The relationship of our work to earlier work on KBMC is also discussed. ",Constructing Situation Specific Belief Networks " Several authors have explained that the likelihood ratio measures the strength of the evidence represented by observations in statistical problems. This idea works fine when the goal is to evaluate the strength of the available evidence for a simple hypothesis versus another simple hypothesis. However, the applicability of this idea is limited to simple hypotheses because the likelihood function is primarily defined on points (simple hypotheses) of the parameter space. In this paper we define a general weight of evidence that is applicable to both simple and composite hypotheses. It is based on the Dempster-Shafer concept of plausibility and is shown to be a generalization of the likelihood ratio. Functional models are of a fundamental importance for the general weight of evidence proposed in this paper. The relevant concepts and ideas are explained by means of a familiar urn problem and the general analysis of a real-world medical problem is presented. ",From Likelihood to Plausibility " Distributed knowledge based applications in open domain rely on common sense information which is bound to be uncertain and incomplete. To draw the useful conclusions from ambiguous data, one must address uncertainties and conflicts incurred in a holistic view. No integrated frameworks are viable without an in-depth analysis of conflicts incurred by uncertainties. In this paper, we give such an analysis and based on the result, propose an integrated framework. Our framework extends definite argumentation theory to model uncertainty. It supports three views over conflicting and uncertain knowledge. Thus, knowledge engineers can draw different conclusions depending on the application context (i.e. view). We also give an illustrative example on strategical decision support to show the practical usefulness of our framework. ",Resolving Conflicting Arguments under Uncertainties " This paper presents two new approaches to decomposing and solving large Markov decision problems (MDPs), a partial decoupling method and a complete decoupling method. In these approaches, a large, stochastic decision problem is divided into smaller pieces. The first approach builds a cache of policies for each part of the problem independently, and then combines the pieces in a separate, light-weight step. 
A second approach also divides the problem into smaller pieces, but information is communicated between the different problem pieces, allowing intelligent decisions to be made about which piece requires the most attention. Both approaches can be used to find optimal policies or approximately optimal policies with provable bounds. These algorithms also provide a framework for the efficient transfer of knowledge across problems that share similar structure. ","Flexible Decomposition Algorithms for Weakly Coupled Markov Decision Problems" " I present a parallel algorithm for exact probabilistic inference in Bayesian networks. For polytree networks with n variables, the worst-case time complexity is O(log n) on a CREW PRAM (concurrent-read, exclusive-write parallel random-access machine) with n processors, for any constant number of evidence variables. For arbitrary networks, the time complexity is O(r^{3w}*log n) for n processors, or O(w*log n) for r^{3w}*n processors, where r is the maximum range of any variable, and w is the induced width (the maximum clique size), after moralizing and triangulating the network. ",Logarithmic Time Parallel Bayesian Inference " The process of diagnosis involves learning about the state of a system from various observations of symptoms or findings about the system. Sophisticated Bayesian (and other) algorithms have been developed to revise and maintain beliefs about the system as observations are made. Nonetheless, diagnostic models have tended to ignore some common sense reasoning exploited by human diagnosticians; In particular, one can learn from which observations have not been made, in the spirit of conversational implicature. There are two concepts that we describe to extract information from the observations not made. First, some symptoms, if present, are more likely to be reported before others. Second, most human diagnosticians and expert systems are economical in their data-gathering, searching first where they are more likely to find symptoms present. Thus, there is a desirable bias toward reporting symptoms that are present. We develop a simple model for these concepts that can significantly improve diagnostic inference. ",Learning From What You Don't Observe " There is evidence that the numbers in probabilistic inference don't really matter. This paper considers the idea that we can make a probabilistic model simpler by making fewer distinctions. Unfortunately, the level of a Bayesian network seems too coarse; it is unlikely that a parent will make little difference for all values of the other parents. In this paper we consider an approximation scheme where distinctions can be ignored in some contexts, but not in other contexts. We elaborate on a notion of a parent context that allows a structured context-specific decomposition of a probability distribution and the associated probabilistic inference scheme called probabilistic partial evaluation (Poole 1997). This paper shows a way to simplify a probabilistic model by ignoring distinctions which have similar probabilities, a method to exploit the simpler model, a bound on the resulting errors, and some preliminary empirical results on simple networks. ",Context-Specific Approximation in Probabilistic Inference " It was recently shown that the problem of decoding messages transmitted through a noisy channel can be formulated as a belief updating task over a probabilistic network [McEliece]. 
Moreover, it was observed that iterative application of Pearl's (linear time) belief propagation algorithm designed for polytrees outperformed state-of-the-art decoding algorithms, even though the corresponding networks may have many cycles. This paper demonstrates empirically that an approximation algorithm, approx-mpe, for solving the most probable explanation (MPE) problem, developed within the recently proposed mini-bucket elimination framework [Dechter96], outperforms iterative belief propagation on classes of coding networks that have bounded induced width. Our experiments suggest that approximate MPE decoders can be good competitors to the approximate belief updating decoders. ","Empirical Evaluation of Approximation Algorithms for Probabilistic Decoding" " This paper describes a decision theoretic formulation of learning the graphical structure of a Bayesian Belief Network from data. This framework subsumes the standard Bayesian approach of choosing the model with the largest posterior probability as the solution of a decision problem with a 0-1 loss function and allows the use of more general loss functions able to trade-off the complexity of the selected model and the error of choosing an oversimplified model. A new class of loss functions, called disintegrable, is introduced, to allow the decision problem to match the decomposability of the graphical model. With this class of loss functions, the optimal solution to the decision problem can be found using an efficient bottom-up search strategy. ",Decision Theoretic Foundations of Graphical Model Selection " One of the benefits of belief networks and influence diagrams is that so much knowledge is captured in the graphical structure. In particular, statements of conditional irrelevance (or independence) can be verified in time linear in the size of the graph. To resolve a particular inference query or decision problem, only some of the possible states and probability distributions must be specified, the ""requisite information."" This paper presents a new, simple, and efficient ""Bayes-ball"" algorithm which is well-suited to both new students of belief networks and state-of-the-art implementations. The Bayes-ball algorithm determines irrelevant sets and requisite information more efficiently than existing methods, and is linear in the size of the graph for belief networks and influence diagrams. ","Bayes-Ball: The Rational Pastime (for Determining Irrelevance and Requisite Information in Belief Networks and Influence Diagrams)" " The paper gives a few arguments in favour of the use of chain graphs for the description of probabilistic conditional independence structures. Every Bayesian network model can be equivalently introduced by means of a factorization formula with respect to a chain graph which is Markov equivalent to the Bayesian network. A graphical characterization of such graphs is given. The class of equivalent graphs can be represented by a distinguished graph which is called the largest chain graph. The factorization formula with respect to the largest chain graph is a basis of a proposal of how to represent the corresponding (discrete) probability distribution in a computer (i.e. parametrize it). This way does not depend on the choice of a particular Bayesian network from the class of equivalent networks and seems to be the most efficient way from the point of view of memory demands. A separation criterion for reading independency statements from a chain graph is formulated in a simpler way. 
It resembles the well-known d-separation criterion for Bayesian networks and can be implemented locally. ",Bayesian Networks from the Point of View of Chain Graphs " This paper is about reducing influence diagram (ID) evaluation to Bayesian network (BN) inference problems. Such reduction is interesting because it enables one to readily use one's favorite BN inference algorithm to efficiently evaluate IDs. Two such reduction methods have been proposed previously (Cooper 1988, Shachter and Peot 1992). This paper proposes a new method. The BN inference problems induced by the new method are much easier to solve than those induced by the two previous methods. ",Probabilistic Inference in Influence Diagrams " There is much interest in using partially observable Markov decision processes (POMDPs) as a formal model for planning in stochastic domains. This paper is concerned with finding optimal policies for POMDPs. We propose several improvements to incremental pruning, presently the most efficient exact algorithm for solving POMDPs. ","Planning with Partially Observable Markov Decision Processes: Advances in Exact Solution Method" " In the real world, insufficient information, limited computation resources, and complex problem structures often force an autonomous agent to make a decision in time less than that required to solve the problem at hand completely. Flexible and approximate computations are two approaches to decision making under limited computation resources. Flexible computation helps an agent to flexibly allocate limited computation resources so that the overall system utility is maximized. Approximate computation enables an agent to find the best satisfactory solution within a deadline. In this paper, we present two state-space reduction methods for flexible and approximate computation: quantitative reduction to deal with inaccurate heuristic information, and structural reduction to handle complex problem structures. These two methods can be applied successively to continuously improve solution quality if more computation is available. Our results show that these reduction methods are effective and efficient, finding better solutions with less computation than some existing well-known methods. ",Flexible and Approximate Computation through State-Space Reduction " Two different definitions of the Artificial Intelligence concept have been proposed in papers [1] and [2]. The first definition is informal. It says that any program that is cleverer than a human being is acknowledged as Artificial Intelligence. The second definition is formal because it avoids reference to the concept of a human being. The readers of papers [1] and [2] might be left with the impression that both definitions are equivalent and the definition in [2] is simply a formal version of that in [1]. This paper will compare both definitions of Artificial Intelligence and, hopefully, will bring a better understanding of the concept. ",Comparison between the two definitions of AI " Class algebra provides a natural framework for sharing of ISA hierarchies between users that may be unaware of each other's definitions. This permits data from relational databases, object-oriented databases, and tagged XML documents to be unioned into one distributed ontology, sharable by all users without the need for prior negotiation or the development of a ""standard"" ontology for each field. Moreover, class algebra produces a functional correspondence between a class's class algebraic definition (i.e. 
its ""intent"") and the set of all instances which satisfy the expression (i.e. its ""extent""). The framework thus provides assistance in quickly locating examples and counterexamples of various definitions. This kind of information is very valuable when developing models of the real world, and serves as an invaluable tool assisting in the proof of theorems concerning these class algebra expressions. Finally, the relative frequencies of objects in the ISA hierarchy can produce a useful Boolean algebra of probabilities. The probabilities can be used by traditional information-theoretic classification methodologies to obtain optimal ways of classifying objects in the database. ",Class Algebra for Ontology Reasoning " We give an effective procedure that produces a natural number in its output from any natural number in its input, that is, it computes a total function. The elementary operations of the procedure are Turing-computable. The procedure has a second input which can contain the Goedel number of any Turing-computable total function whose range is a subset of the set of the Goedel numbers of all Turing-computable total functions. We prove that the second input cannot be set to the Goedel number of any Turing-computable function that computes the output from any natural number in its first input. In this sense, there is no Turing program that computes the output from its first input. The procedure is used to define creative procedures which compute functions that are not Turing-computable. We argue that creative procedures model an aspect of reasoning that cannot be modeled by Turing machines. ","An Effective Procedure for Computing ""Uncomputable"" Functions" " The paper considers the class of information systems capable of solving heuristic problems on basis of formal theory that was termed modal and vector theory of formal intelligent systems (FIS). The paper justifies the construction of FIS resolution algorithm, defines the main features of these systems and proves theorems that underlie the theory. The principle of representation diversity of FIS construction is formulated. The paper deals with the main principles of constructing and functioning formal intelligent system (FIS) on basis of FIS modal and vector theory. The following phenomena are considered: modular architecture of FIS presentation sub-system, algorithms of data processing at every step of the stage of creating presentations. Besides the paper suggests the structure of neural elements, i.e. zone detectors and processors that are the basis for FIS construction. ",Principles of modal and vector theory of formal intelligence systems " Wide-angle sonar mapping of the environment by mobile robot is nontrivial due to several sources of uncertainty: dropouts due to ""specular"" reflections, obstacle location uncertainty due to the wide beam, and distance measurement error. Earlier papers address the latter problems, but dropouts remain a problem in many environments. We present an approach that lifts the overoptimistic independence assumption used in earlier work, and use Bayes nets to represent the dependencies between objects of the model. Objects of the model consist of readings, and of regions in which ""quasi location invariance"" of the (possible) obstacles exists, with respect to the readings. Simulation supports the method's feasibility. The model is readily extensible to allow for prior distributions, as well as other types of sensing operations. 
",Bayes Networks for Sonar Sensor Fusion " A modelling language is described which is suitable for the correlation of information when the underlying functional model of the system is incomplete or uncertain and the temporal dependencies are imprecise. An efficient and incremental implementation is outlined which depends on cost functions satisfying certain criteria. Possibilistic logic and probability theory (as it is used in the applications targetted) satisfy these criteria. ",Exploiting Uncertain and Temporal Information in Correlation " Much recent research in decision theoretic planning has adopted Markov decision processes (MDPs) as the model of choice, and has attempted to make their solution more tractable by exploiting problem structure. One particular algorithm, structured policy construction achieves this by means of a decision theoretic analog of goal regression using action descriptions based on Bayesian networks with tree-structured conditional probability tables. The algorithm as presented is not able to deal with actions with correlated effects. We describe a new decision theoretic regression operator that corrects this weakness. While conceptually straightforward, this extension requires a somewhat more complicated technical approach. ",Correlated Action Effects in Decision Theoretic Regression " Performance prediction or forecasting sporting outcomes involves a great deal of insight into the particular area one is dealing with, and a considerable amount of intuition about the factors that bear on such outcomes and performances. The mathematical Theory of Evidence offers representation formalisms which grant experts a high degree of freedom when expressing their subjective beliefs in the context of decision-making situations like performance prediction. Furthermore, this reasoning framework incorporates a powerful mechanism to systematically pool the decisions made by individual subject matter experts. The idea behind such a combination of knowledge is to improve the competence (quality) of the overall decision-making process. This paper reports on a performance prediction experiment carried out during the European Football Championship in 1996. Relying on the knowledge of four predictors, Evidence Theory was used to forecast the final scores of all 31 matches. The results of this empirical study are very encouraging. ",Corporate Evidential Decision Making in Performance Prediction Domains " Decomposable dependency models and their graphical counterparts, i.e., chordal graphs, possess a number of interesting and useful properties. On the basis of two characterizations of decomposable models in terms of independence relationships, we develop an exact algorithm for recovering the chordal graphical representation of any given decomposable model. We also propose an algorithm for learning chordal approximations of dependency models isomorphic to general undirected graphs. ",Algorithms for Learning Decomposable Models and Chordal Graphs " Most exact algorithms for general partially observable Markov decision processes (POMDPs) use a form of dynamic programming in which a piecewise-linear and convex representation of one value function is transformed into another. We examine variations of the ""incremental pruning"" method for solving this problem and compare them to earlier algorithms from theoretical and empirical perspectives. We find that incremental pruning is presently the most efficient exact method for solving POMDPs. 
","Incremental Pruning: A Simple, Fast, Exact Method for Partially Observable Markov Decision Processes" " As probabilistic systems gain popularity and are coming into wider use, the need for a mechanism that explains the system's findings and recommendations becomes more critical. The system will also need a mechanism for ordering competing explanations. We examine two representative approaches to explanation in the literature - one due to G\""ardenfors and one due to Pearl - and show that both suffer from significant problems. We propose an approach to defining a notion of ""better explanation"" that combines some of the features of both together with more recent work by Pearl and others on causality. ",Defining Explanation in Probabilistic Systems " We present an algorithm for arc reversal in Bayesian networks with tree-structured conditional probability tables, and consider some of its advantages, especially for the simulation of dynamic probabilistic networks. In particular, the method allows one to produce CPTs for nodes involved in the reversal that exploit regularities in the conditional distributions. We argue that this approach alleviates some of the overhead associated with arc reversal, plays an important role in evidence integration and can be used to restrict sampling of variables in DPNs. We also provide an algorithm that detects the dynamic irrelevance of state variables in forward simulation. This algorithm exploits the structured CPTs in a reversed network to determine, in a time-independent fashion, the conditions under which a variable does or does not need to be sampled. ",Structured Arc Reversal and Simulation of Dynamic Probabilistic Networks " Robust Bayesian inference is the calculation of posterior probability bounds given perturbations in a probabilistic model. This paper focuses on perturbations that can be expressed locally in Bayesian networks through convex sets of distributions. Two approaches for combination of local models are considered. The first approach takes the largest set of joint distributions that is compatible with the local sets of distributions; we show how to reduce this type of robust inference to a linear programming problem. The second approach takes the convex hull of joint distributions generated from the local sets of distributions; we demonstrate how to apply interior-point optimization methods to generate posterior bounds and how to generate approximations that are guaranteed to converge to correct posterior bounds. We also discuss calculation of bounds for expected utilities and variances, and global perturbation models. ","Robustness Analysis of Bayesian Networks with Local Convex Sets of Distributions" " This paper proposes a novel, algorithm-independent approach to optimizing belief network inference. rather than designing optimizations on an algorithm by algorithm basis, we argue that one should use an unoptimized algorithm to generate a Q-DAG, a compiled graphical representation of the belief network, and then optimize the Q-DAG and its evaluator instead. We present a set of Q-DAG optimizations that supplant optimizations designed for traditional inference algorithms, including zero compression, network pruning and caching. We show that our Q-DAG optimizations require time linear in the Q-DAG size, and significantly simplify the process of designing algorithms for optimizing belief network inference. 
","A Standard Approach for Optimizing Belief Network Inference using Query DAGs" " We present a method for solving implicit (factored) Markov decision processes (MDPs) with very large state spaces. We introduce a property of state space partitions which we call epsilon-homogeneity. Intuitively, an epsilon-homogeneous partition groups together states that behave approximately the same under all or some subset of policies. Borrowing from recent work on model minimization in computer-aided software verification, we present an algorithm that takes a factored representation of an MDP and an 0<=epsilon<=1 and computes a factored epsilon-homogeneous partition of the state space. This partition defines a family of related MDPs - those MDPs with state space equal to the blocks of the partition, and transition probabilities ""approximately"" like those of any (original MDP) state in the source block. To formally study such families of MDPs, we introduce the new notion of a ""bounded parameter MDP"" (BMDP), which is a family of (traditional) MDPs defined by specifying upper and lower bounds on the transition probabilities and rewards. We describe algorithms that operate on BMDPs to find policies that are approximately optimal with respect to the original MDP. In combination, our method for reducing a large implicit MDP to a possibly much smaller BMDP using an epsilon-homogeneous partition, and our methods for selecting actions in BMDPs constitute a new approach for analyzing large implicit MDPs. Among its advantages, this new approach provides insight into existing algorithms to solving implicit MDPs, provides useful connections to work in automata theory and model minimization, and suggests methods, which involve varying epsilon, to trade time and space (specifically in terms of the size of the corresponding state space) for solution quality. ","Model Reduction Techniques for Computing Approximately Optimal Solutions for Markov Decision Processes" " This paper describes a class of probabilistic approximation algorithms based on bucket elimination which offer adjustable levels of accuracy and efficiency. We analyze the approximation for several tasks: finding the most probable explanation, belief updating and finding the maximum a posteriori hypothesis. We identify regions of completeness and provide preliminary empirical evaluation on randomly generated networks. ",A Scheme for Approximating Probabilistic Inference " We present a method for calculation of myopic value of information in influence diagrams (Howard & Matheson, 1981) based on the strong junction tree framework (Jensen, Jensen & Dittmer, 1994). The difference in instantiation order in the influence diagrams is reflected in the corresponding junction trees by the order in which the chance nodes are marginalized. This order of marginalization can be changed by table expansion and in effect the same junction tree with expanded tables may be used for calculating the expected utility for scenarios with different instantiation order. We also compare our method to the classic method of modeling different instantiation orders in the same influence diagram. ",Myopic Value of Information in Influence Diagrams " Poole has shown that nonmonotonic logics do not handle the lottery paradox correctly. In this paper we will show that Pollock's theory of defeasible reasoning fails for the same reason: defeasible reasoning is incompatible with the skeptical notion of derivability. 
",Limitations of Skeptical Default Reasoning " This paper investigates the problem of finding a preference relation on a set of acts from the knowledge of an ordering on events (subsets of states of the world) describing the decision-maker (DM)s uncertainty and an ordering of consequences of acts, describing the DMs preferences. However, contrary to classical approaches to decision theory, we try to do it without resorting to any numerical representation of utility nor uncertainty, and without even using any qualitative scale on which both uncertainty and preference could be mapped. It is shown that although many axioms of Savage theory can be preserved and despite the intuitive appeal of the method for constructing a preference over acts, the approach is inconsistent with a probabilistic representation of uncertainty, but leads to the kind of uncertainty theory encountered in non-monotonic reasoning (especially preferential and rational inference), closely related to possibility theory. Moreover the method turns out to be either very little decisive or to lead to very risky decisions, although its basic principles look sound. This paper raises the question of the very possibility of purely symbolic approaches to Savage-like decision-making under uncertainty and obtains preliminary negative results. ",Decision-making Under Ordinal Preferences and Comparative Uncertainty " We examine the computational complexity of testing and finding small plans in probabilistic planning domains with succinct representations. We find that many problems of interest are complete for a variety of complexity classes: NP, co-NP, PP, NP^PP, co-NP^PP, and PSPACE. Of these, the probabilistic classes PP and NP^PP are likely to be of special interest in the field of uncertainty in artificial intelligence and are deserving of additional study. These results suggest a fruitful direction of future algorithmic development. ",The Complexity of Plan Existence and Evaluation in Probabilistic Domains " Stochastic algorithms are among the best for solving computationally hard search and reasoning problems. The runtime of such procedures is characterized by a random variable. Different algorithms give rise to different probability distributions. One can take advantage of such differences by combining several algorithms into a portfolio, and running them in parallel or interleaving them on a single processor. We provide a detailed evaluation of the portfolio approach on distributions of hard combinatorial search problems. We show under what conditions the protfolio approach can have a dramatic computational advantage over the best traditional methods. ",Algorithm Portfolio Design: Theory vs. Practice " Conditioning is the generally agreed-upon method for updating probability distributions when one learns that an event is certainly true. But it has been argued that we need other rules, in particular the rule of cross-entropy minimization, to handle updates that involve uncertain information. In this paper we re-examine such a case: van Fraassen's Judy Benjamin problem, which in essence asks how one might update given the value of a conditional probability. We argue that -- contrary to the suggestions in the literature -- it is possible to use simple conditionalization in this case, and thereby obtain answers that agree fully with intuition. This contrasts with proposals such as cross-entropy, which are easier to apply but can give unsatisfactory answers. 
Based on the lessons from this example, we speculate on some general philosophical issues concerning probability update. ",Probability Update: Conditioning vs. Cross-Entropy " Valuation-based systems satisfying an idempotent property are studied. A partial order is defined between the valuations giving them a lattice structure. Then, two different strategies are introduced to represent valuations: as the infimum of the most informative valuations or as the supremum of the least informative ones. It is studied how to carry out computations with both representations in an efficient way. The particular cases of finite sets and convex polytopes are considered. ",Inference with Idempotent Valuations " We review the problem of time-critical action and discuss a reformulation that shifts knowledge acquisition from the assessment of complex temporal probabilistic dependencies to the direct assessment of time-dependent utilities over key outcomes of interest. We dwell on a class of decision problems characterized by the centrality of diagnosing and reacting in a timely manner to pathological processes. We motivate key ideas in the context of trauma-care triage and transportation decisions. ",Time-Critical Reasoning: Representations and Application " A new method is developed to represent probabilistic relations on multiple random events. Where previously knowledge bases containing probabilistic rules were used for this purpose, here a probability distribution over the relations is directly represented by a Bayesian network. By using a powerful way of specifying conditional probability distributions in these networks, the resulting formalism is more expressive than the previous ones. Particularly, it provides for constraints on equalities of events, and it allows one to define complex, nested combination functions. ",Relational Bayesian Networks " Decomposable models and Bayesian networks can be defined as sequences of oligo-dimensional probability measures connected with operators of composition. The preliminary results suggest that the probabilistic models allowing for effective computational procedures are represented by sequences possessing a special property; we shall call them perfect sequences. The paper lays down the elementary foundation necessary for further study of iterative application of operators of composition. We aim to develop a technique describing several graph models in a unifying way. We are convinced that practically all theoretical results and procedures connected with decomposable models and Bayesian networks can be translated into the terminology introduced in this paper. For example, complexity of computational procedures in these models is closely dependent on the possibility of changing the ordering of the oligo-dimensional measures defining the model. Therefore, in this paper, much attention is paid to the possibility of changing the ordering of the operators of composition. ",Composition of Probability Measures on Finite Spaces " The efficiency of inference in both the Hugin and, most notably, the Shafer-Shenoy architectures can be improved by exploiting the independence relations induced by the incoming messages of a clique. That is, the message to be sent from a clique can be computed via a factorization of the clique potential in the form of a junction tree. In this paper we show that by exploiting such nested junction trees in the computation of messages both space and time costs of the conventional propagation methods may be reduced. 
The paper presents a structured way of exploiting the nested junction trees technique to achieve such reductions. The usefulness of the method is emphasized through a thorough empirical evaluation involving ten large real-world Bayesian networks and the Hugin inference algorithm. ",Nested Junction Trees " Bayesian networks provide a modeling language and associated inference algorithm for stochastic domains. They have been successfully applied in a variety of medium-scale applications. However, when faced with a large complex domain, the task of modeling using Bayesian networks begins to resemble the task of programming using logical circuits. In this paper, we describe an object-oriented Bayesian network (OOBN) language, which allows complex domains to be described in terms of inter-related objects. We use a Bayesian network fragment to describe the probabilistic relations between the attributes of an object. These attributes can themselves be objects, providing a natural framework for encoding part-of hierarchies. Classes are used to provide a reusable probabilistic model which can be applied to multiple similar objects. Classes also support inheritance of model fragments from a class to a subclass, allowing the common aspects of related classes to be defined only once. Our language has clear declarative semantics: an OOBN can be interpreted as a stochastic functional program, so that it uniquely specifies a probabilistic model. We provide an inference algorithm for OOBNs, and show that much of the structural information encoded by an OOBN--particularly the encapsulation of variables within an object and the reuse of model fragments in different contexts--can also be used to speed up the inference process. ",Object-Oriented Bayesian Networks " We consider probabilistic inference in general hybrid networks, which include continuous and discrete variables in an arbitrary topology. We reexamine the question of variable discretization in a hybrid network aiming at minimizing the information loss induced by the discretization. We show that a nonuniform partition across all variables as opposed to uniform partition of each variable separately reduces the size of the data structures needed to represent a continuous function. We also provide a simple but efficient procedure for nonuniform partition. To represent a nonuniform discretization in the computer memory, we introduce a new data structure, which we call a Binary Split Partition (BSP) tree. We show that BSP trees can be an exponential factor smaller than the data structures in the standard uniform discretization in multiple dimensions and show how the BSP trees can be used in the standard join tree algorithm. We show that the accuracy of the inference process can be significantly improved by adjusting discretization with evidence. We construct an iterative anytime algorithm that gradually improves the quality of the discretization and the accuracy of the answer on a query. We provide empirical evidence that the algorithm converges. ",Nonuniform Dynamic Discretization in Hybrid Networks " The idea of fully accepting statements when the evidence has rendered them probable enough faces a number of difficulties. We leave the interpretation of probability largely open, but attempt to suggest a contextual approach to full belief. 
We show that the difficulties of probabilistic acceptance are not as severe as they are sometimes painted, and that though there are oddities associated with probabilistic acceptance they are in some instances less awkward than the difficulties associated with other nonmonotonic formalisms. We show that the structure at which we arrive provides a natural home for statistical inference. ",Probabilistic Acceptance " In most current applications of belief networks, domain knowledge is represented by a single belief network that applies to all problem instances in the domain. In more complex domains, problem-specific models must be constructed from a knowledge base encoding probabilistic relationships in the domain. Most work in knowledge-based model construction takes the rule as the basic unit of knowledge. We present a knowledge representation framework that permits the knowledge base designer to specify knowledge in larger semantically meaningful units which we call network fragments. Our framework provides for representation of asymmetric independence and canonical intercausal interaction. We discuss the combination of network fragments to form problem-specific models to reason about particular problem instances. The framework is illustrated using examples from the domain of military situation awareness. ","Network Fragments: Representing Knowledge for Constructing Probabilistic Models" " This paper introduces a computational framework for reasoning in Bayesian belief networks that derives significant advantages from focused inference and relevance reasoning. This framework is based on d-separation and other simple and computationally efficient techniques for pruning irrelevant parts of a network. Our main contribution is a technique that we call relevance-based decomposition. Relevance-based decomposition approaches belief updating in large networks by focusing on their parts and decomposing them into partially overlapping subnetworks. This makes reasoning in some intractable networks possible and, in addition, often results in significant speedup, as the total time taken to update all subnetworks is in practice often considerably less than the time taken to update the network as a whole. We report results of empirical tests that demonstrate the practical significance of our approach. ","Computational Advantages of Relevance Reasoning in Bayesian Belief Networks" " A submarine's sonar team is responsible for detecting, localising and classifying targets using information provided by the platform's sensor suite. The information used to make these assessments is typically uncertain and/or incomplete and is likely to require a measure of confidence in its reliability. Moreover, improvements in sensor and communication technology are resulting in increased amounts of on-platform and off-platform information available for evaluation. This proliferation of imprecise information increases the risk of overwhelming the operator. To assist the task of localisation and classification, a concept demonstration decision aid (Horizon), based on evidential reasoning, has been developed. Horizon is an information fusion software package for representing and fusing imprecise information about the state of the world, expressed across suitable frames of reference. The Horizon software is currently at prototype stage. 
",A Target Classification Decision Aid " By discussing several examples, the theory of generalized functional models is shown to be very natural for modeling some situations of reasoning under uncertainty. A generalized functional model is a pair (f, P) where f is a function describing the interactions between a parameter variable, an observation variable and a random source, and P is a probability distribution for the random source. Unlike traditional functional models, generalized functional models do not require that there is only one value of the parameter variable that is compatible with an observation and a realization of the random source. As a consequence, the results of the analysis of a generalized functional model are not expressed in terms of probability distributions but rather by support and plausibility functions. The analysis of a generalized functional model is very logical and is inspired from ideas already put forward by R.A. Fisher in his theory of fiducial probability. ",Support and Plausibility Degrees in Generalized Functional Models " There is a brief description of the probabilistic causal graph model for representing, reasoning with, and learning causal structure using Bayesian networks. It is then argued that this model is closely related to how humans reason with and learn causal structure. It is shown that studies in psychology on discounting (reasoning concerning how the presence of one cause of an effect makes another cause less probable) support the hypothesis that humans reach the same judgments as algorithms for doing inference in Bayesian networks. Next, it is shown how studies by Piaget indicate that humans learn causal structure by observing the same independencies and dependencies as those used by certain algorithms for learning the structure of a Bayesian network. Based on this indication, a subjective definition of causality is forwarded. Finally, methods for further testing the accuracy of these claims are discussed. ",The Cognitive Processing of Causal Knowledge " Bayesian knowledge bases (BKBs) are a generalization of Bayes networks and weighted proof graphs (WAODAGs), that allow cycles in the causal graph. Reasoning in BKBs requires finding the most probable inferences consistent with the evidence. The cost-sharing heuristic for finding least-cost explanations in WAODAGs was presented and shown to be effective by Charniak and Husain. However, the cycles in BKBs would make the definition of cost-sharing cyclic as well, if applied directly to BKBs. By treating the defining equations of cost-sharing as a system of equations, one can properly define an admissible cost-sharing heuristic for BKBs. Empirical evaluation shows that cost-sharing improves performance significantly when applied to BKBs. ",Cost-Sharing in Bayesian Knowledge Bases " Default logic encounters some conceptual difficulties in representing common sense reasoning tasks. We argue that we should not try to formulate modular default rules that are presumed to work in all or most circumstances. We need to take into account the importance of the context which is continuously evolving during the reasoning process. Sequential thresholding is a quantitative counterpart of default logic which makes explicit the role context plays in the construction of a non-monotonic extension. We present a semantic characterization of generic non-monotonic reasoning, as well as the instantiations pertaining to default logic and sequential thresholding. 
This provides a link between the two mechanisms as well as a way to integrate the two that can be beneficial to both. ",Sequential Thresholds: Context Sensitive Default Extensions " A stable joint plan should guarantee the achievement of a designer's goal in a multi-agent environment, while ensuring that deviations from the prescribed plan would be detected. We present a computational framework where stable joint plans can be studied, as well as several basic results about the representation, verification and synthesis of stable joint plans. ",On Stable Multi-Agent Behavior in Face of Uncertainty " This paper is concerned with planning in stochastic domains by means of partially observable Markov decision processes (POMDPs). POMDPs are difficult to solve. This paper identifies a subclass of POMDPs called region observable POMDPs, which are easier to solve and can be used to approximate general POMDPs to arbitrary accuracy. ",Region-Based Approximations for Planning in Stochastic Domains " This paper explores the role of independence of causal influence (ICI) in Bayesian network inference. ICI allows one to factorize a conditional probability table into smaller pieces. We describe a method for exploiting the factorization in clique tree propagation (CTP) - the state-of-the-art exact inference algorithm for Bayesian networks. We also present empirical results showing that the resulting algorithm is significantly more efficient than the combination of CTP and previous techniques for exploiting ICI. ",Independence of Causal Influence and Clique Tree Propagation " Planning problems where effects of actions are non-deterministic can be modeled as Markov decision processes. Planning problems are usually goal-directed. This paper proposes several techniques for exploiting the goal-directedness to accelerate value iteration, a standard algorithm for solving Markov decision processes. Empirical studies have shown that the techniques can bring about significant speedups. ",Fast Value Iteration for Goal-Directed Markov Decision Processes " We analyse the complexity of environments according to the policies that need to be used to achieve high performance. The performance results for a population of policies lead to a distribution that is examined in terms of policy complexity and analysed through several diagrams and indicators. The notion of environment response curve is also introduced, by inverting the performance results into an ability scale. We apply all these concepts, diagrams and indicators to a minimalistic environment class, agent-populated elementary cellular automata, showing how the difficulty, discriminating power and ranges (prior to normalisation) may vary for several environments. ",Complexity distribution of agent policies " The best currently known interactive debugging systems rely upon some meta-information in terms of fault probabilities in order to improve their efficiency. However, misleading meta-information might result in a dramatic decrease in performance, and its assessment is only possible a posteriori. Consequently, as long as the actual fault is unknown, there is always some risk of suboptimal interactions. In this work we present a reinforcement learning strategy that continuously adapts its behavior depending on the performance achieved and minimizes the risk of using low-quality meta-information. Therefore, this method is suitable for application scenarios where reliable prior fault estimates are difficult to obtain. 
Using diverse real-world knowledge bases, we show that the proposed interactive query strategy is scalable, features decent reaction time, and outperforms both entropy-based and no-risk strategies on average w.r.t. required amount of user interaction. ",RIO: Minimizing User Interaction in Debugging of Knowledge Bases " The criterion commonly used in directed acyclic graphs (dags) for testing graphical independence is the well-known d-separation criterion. It allows us to build graphical representations of dependency models (usually probabilistic dependency models) in the form of belief networks, which make easy interpretation and management of independence relationships possible, without reference to numerical parameters (conditional probabilities). In this paper, we study the following combinatorial problem: finding the minimum d-separating set for two nodes in a dag. This set would represent the minimum information (in the sense of minimum number of variables) necessary to prevent these two nodes from influencing each other. The solution to this basic problem and some of its extensions can be useful in several ways, as we shall see later. Our solution is based on a two-step process: first, we reduce the original problem to the simpler one of finding a minimum separating set in an undirected graph, and second, we develop an algorithm for solving it. ",An Algorithm for Finding Minimum d-Separating Sets in Belief Networks " This paper works through the optimization of a real world planning problem, with a combination of a generative planning tool and an influence diagram solver. The problem is taken from an existing application in the domain of oil spill emergency response. The planning agent manages constraints that order sets of feasible equipment employment actions. This is mapped at an intermediate level of abstraction onto an influence diagram. In addition, the planner can apply a surveillance operator that determines observability of the state---the unknown trajectory of the oil. The uncertain world state and the objective function properties are part of the influence diagram structure, but not represented in the planning agent domain. By exploiting this structure under the constraints generated by the planning agent, the influence diagram solution complexity simplifies considerably, and an optimum solution to the employment problem based on the objective function is found. Finding this optimum is equivalent to the simultaneous evaluation of a range of plans. This result is an example of bounded optimality, within the limitations of this hybrid generative planner and influence diagram architecture. ","Constraining Influence Diagram Structure by Generative Planning: An Application to the Optimization of Oil Spill Response" " We extend Gaussian networks - directed acyclic graphs that encode probabilistic relationships between variables - to its vector form. Vector Gaussian continuous networks consist of composite nodes representing multivariates, that take continuous values. These vector or composite nodes can represent correlations between parents, as opposed to conventional univariate nodes. We derive rules for inference in these networks based on two methods: message propagation and topology transformation. These two approaches lead to the development of algorithms, that can be implemented in either a centralized or a decentralized manner. The domain of application of these networks are monitoring and estimation problems. 
This new representation along with the rules for inference developed here can be used to derive current Bayesian algorithms such as the Kalman filter, and provide a rich foundation to develop new algorithms. We illustrate this process by deriving the decentralized form of the Kalman filter. This work unifies concepts from artificial intelligence and modern control theory. ","Inference Using Message Propagation and Topology Transformation in Vector Gaussian Continuous Networks" " We developed the language of Modifiable Temporal Belief Networks (MTBNs) as a structural and temporal extension of Bayesian Belief Networks (BNs) to facilitate normative temporal and causal modeling under uncertainty. In this paper we present definitions of the model, its components, and its fundamental properties. We also discuss how to represent various types of temporal knowledge, with an emphasis on hybrid temporal-explicit time modeling, dynamic structures, avoiding causal temporal inconsistencies, and dealing with models that involve simultaneously actions (decisions) and causal and non-causal associations. We examine the relationships among BNs, Modifiable Belief Networks, and MTBNs with a single temporal granularity, and suggest areas of application suitable to each one of them. ","A Structurally and Temporally Extended Bayesian Belief Network Model: Definitions, Properties, and Modeling Techniques" " Graphical Markov models use graphs, either undirected, directed, or mixed, to represent possible dependences among statistical variables. Applications of undirected graphs (UDGs) include models for spatial dependence and image analysis, while acyclic directed graphs (ADGs), which are especially convenient for statistical analysis, arise in such fields as genetics and psychometrics and as models for expert systems and Bayesian belief networks. Lauritzen, Wermuth and Frydenberg (LWF) introduced a Markov property for chain graphs, which are mixed graphs that can be used to represent simultaneously both causal and associative dependencies and which include both UDGs and ADGs as special cases. In this paper an alternative Markov property (AMP) for chain graphs is introduced, which in some ways is a more direct extension of the ADG Markov property than is the LWF property for chain graphs. ",An Alternative Markov Property for Chain Graphs " Approximate models of world state transitions are necessary when building plans for complex systems operating in dynamic environments. External event probabilities can depend on state feature values as well as time spent in that particular state. We assign temporally-dependent probability functions to state transitions. These functions are used to locally compute state probabilities, which are then used to select highly probable goal paths and eliminate improbable states. This probabilistic model has been implemented in the Cooperative Intelligent Real-time Control Architecture (CIRCA), which combines an AI planner with a separate real-time system such that plans are developed, scheduled, and executed with real-time guarantees. We present flight simulation tests that demonstrate how our probabilistic model may improve CIRCA performance. ",Plan Development using Local Probabilistic Models " A nonmonotonic logic of thresholded generalizations is presented. Given propositions A and B from a language L and a positive integer k, the thresholded generalization A=>B{k} means that the conditional probability P(B|A) falls short of one by no more than c*d^k. 
A two-level probability structure is defined. At the lower level, a model is defined to be a probability function on L. At the upper level, there is a probability distribution over models. A definition is given of what it means for a collection of thresholded generalizations to entail another thresholded generalization. This nonmonotonic entailment relation, called ""entailment in probability"", has the feature that its conclusions are ""probabilistically trustworthy"", meaning that, given true premises, it is improbable that an entailed conclusion would be false. A procedure is presented for ascertaining whether any given collection of premises entails any given conclusion. It is shown that entailment in probability is closely related to Goldszmidt and Pearl's System-Z^+, thereby demonstrating that the conclusions of System-Z^+ are probabilistically trustworthy. ",Entailment in Probability of Thresholded Generalizations " The computational complexity of reasoning within the Dempster-Shafer theory of evidence is one of the main points of criticism this formalism has to face. To overcome this difficulty various approximation algorithms have been suggested that aim at reducing the number of focal elements in the belief functions involved. Besides introducing a new algorithm using this method, this paper describes an empirical study that examines the appropriateness of these approximation procedures in decision making situations. It presents the empirical findings and discusses the various tradeoffs that have to be taken into account when actually applying one of these methods. ","Approximations for Decision Making in the Dempster-Shafer Theory of Evidence" " Possibility theory offers a framework where both Lehmann's ""preferential inference"" and the more productive (but less cautious) ""rational closure inference"" can be represented. However, there are situations where the second inference does not provide the expected results, either because it cannot produce them or because it even provides counter-intuitive conclusions. This state of affairs is not due to the principle of selecting a unique ordering of interpretations (which can be encoded by one possibility distribution), but rather to the absence of constraints expressing pieces of knowledge we have implicitly in mind. It is advocated in this paper that constraints induced by independence information can help find the right ordering of interpretations. In particular, independence constraints can be systematically assumed with respect to formulas composed of literals which do not appear in the conditional knowledge base, or for default rules with respect to situations which are ""normal"" according to the other default rules in the base. The notion of independence which is used can be easily expressed in the qualitative setting of possibility theory. Moreover, when a counter-intuitive plausible conclusion of a set of defaults is in its rational closure, but not in its preferential closure, it is always possible to repair the set of defaults so as to produce the desired conclusion. ","Coping with the Limitations of Rational Inference in the Framework of Possibility Theory" " We develop a qualitative model of decision making with two aims: to describe how people make simple decisions and to enable computer programs to do the same. Current approaches based on Planning or Decision Theory either ignore uncertainty and tradeoffs, or provide languages and algorithms that are too complex for this task. 
The proposed model provides a language based on rules, a semantics based on high probabilities and lexicographical preferences, and a transparent decision procedure where reasons for and against decisions interact. The model is no substitute for Decision Theory, yet for decisions that people find easy to explain it may provide an appealing alternative. ",Arguing for Decisions: A Qualitative Model of Decision Making " Bayesian networks provide a language for qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well-known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation scheme - tree-structured CPTs - for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning. ",Context-Specific Independence in Bayesian Networks " We develop and extend existing decision-theoretic methods for troubleshooting a nonfunctioning device. Traditionally, diagnosis with Bayesian networks has focused on belief updating---determining the probabilities of various faults given current observations. In this paper, we extend this paradigm to include taking actions. In particular, we consider three classes of actions: (1) we can make observations regarding the behavior of a device and infer likely faults as in traditional diagnosis, (2) we can repair a component and then observe the behavior of the device to infer likely faults, and (3) we can change the configuration of the device, observe its new behavior, and infer the likelihood of faults. Analysis of the latter two classes of troubleshooting actions requires incorporating notions of persistence into the belief-network formalism used for probabilistic inference. ","Decision-Theoretic Troubleshooting: A Framework for Repair and Experiment" " It is shown that the ability of the interval probability representation to capture epistemological independence is severely limited. Two events are epistemologically independent if knowledge of the first event does not alter belief (i.e., probability bounds) about the second. However, independence in this form can only exist in a 2-monotone probability function in degenerate cases, i.e., if the prior bounds are either point probabilities or entirely vacuous. Additional limitations are characterized for other classes of lower probabilities as well. It is argued that these phenomena are simply a matter of interpretation. 
They appear to be limitations when one interprets probability bounds as a measure of epistemological indeterminacy (i.e., uncertainty arising from a lack of knowledge), but are exactly as one would expect when probability intervals are interpreted as representations of ontological indeterminacy (indeterminacy introduced by structural approximations). The ontological interpretation is introduced and discussed. ",Independence with Lower and Upper Probabilities " Lower and upper probabilities, also known as Choquet capacities, are widely used as a convenient representation for sets of probability distributions. This paper presents a graphical decomposition and exact propagation algorithm for computing marginal posteriors of 2-monotone lower probabilities (equivalently, 2-alternating upper probabilities). ",Propagation of 2-Monotone Lower Probabilities on an Undirected Graph " Quasi-Bayesian theory uses convex sets of probability distributions and expected loss to represent preferences about plans. The theory focuses on decision robustness, i.e., the extent to which plans are affected by deviations in subjective assessments of probability. The present work presents solutions for plan generation when robustness of probability assessments must be included: plans contain information about the robustness of certain actions. The surprising result is that some problems can be solved faster in the Quasi-Bayesian framework than within usual Bayesian theory. We investigate this on the planning to observe problem, i.e., an agent must decide whether to take new observations or not. The fundamental question is: How, and how much, to search for a ""best"" plan, based on the robustness of probability assessments? Plan generation algorithms are derived in the context of material classification with an acoustic robotic probe. A package that constructs Quasi-Bayesian plans is available through anonymous ftp. ","Quasi-Bayesian Strategies for Efficient Plan Generation: Application to the Planning to Observe Problem" " Real-time Decision algorithms are a class of incremental resource-bounded [Horvitz, 89] or anytime [Dean, 93] algorithms for evaluating influence diagrams. We present a test domain for real-time decision algorithms, and the results of experiments with several Real-time Decision Algorithms in this domain. The results demonstrate high performance for two algorithms, a decision-evaluation variant of Incremental Probabilistic Inference [D'Ambrosio 93] and a variant of an algorithm suggested by Goldszmidt [Goldszmidt, 95], PK-reduced. We discuss the implications of these experimental results and explore the broader applicability of these algorithms. ",Some Experiments with Real-Time Decision Algorithms " Probabilistic inference algorithms for finding the most probable explanation, the maximum a posteriori hypothesis, and the maximum expected utility and for updating belief are reformulated as an elimination-type algorithm called bucket elimination. This emphasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining conditioning and elimination within this framework. Bounds on complexity are given for all the algorithms as a function of the problem's structure. ","Bucket Elimination: A Unifying Framework for Several Probabilistic Inference" " In this paper we propose a family of algorithms combining tree-clustering with conditioning that trade space for time. 
Such algorithms are useful for reasoning in probabilistic and deterministic networks as well as for accomplishing optimization tasks. By analyzing the problem structure it will be possible to select from a spectrum the algorithm that best meets a given time-space specification. ",Topological Parameters for Time-Space Tradeoff " This paper provides a formal and practical framework for sound abstraction of probabilistic actions. We start by precisely defining the concept of sound abstraction within the context of finite-horizon planning (where each plan is a finite sequence of actions). Next we show that such abstraction cannot be performed within the traditional probabilistic action representation, which models a world with a single probability distribution over the state space. We then present the constraint mass assignment representation, which models the world with a set of probability distributions and is a generalization of mass assignment representations. Within this framework, we present sound abstraction procedures for three types of action abstraction. We end the paper with discussions and related work on sound and approximate abstraction. We give pointers to papers in which we discuss other sound abstraction-related issues, including applications, estimating loss due to abstraction, and automatically generating abstraction hierarchies. ","Sound Abstraction of Probabilistic Actions in The Constraint Mass Assignment Framework" " This paper discusses belief revision under uncertain inputs in the framework of possibility theory. Revision can be based on two possible definitions of the conditioning operation, one based on min operator which requires a purely ordinal scale only, and another based on product, for which a richer structure is needed, and which is a particular case of Dempster's rule of conditioning. Besides, revision under uncertain inputs can be understood in two different ways depending on whether the input is viewed, or not, as a constraint to enforce. Moreover, it is shown that M.A. Williams' transmutations, originally defined in the setting of Spohn's functions, can be captured in this framework, as well as Boutilier's natural revision. ",Belief Revision with Uncertain Inputs in the Possibilistic Setting " Many algorithms for processing probabilistic networks are dependent on the topological properties of the problem's structure. Such algorithms (e.g., clustering, conditioning) are effective only if the problem has a sparse graph captured by parameters such as tree width and cycle-cut set size. In this paper we initiate a study to determine the potential of structure-based algorithms in real-life applications. We analyze empirically the structural properties of problems coming from the circuit diagnosis domain. Specifically, we locate those properties that capture the effectiveness of clustering and conditioning as well as of a family of conditioning+clustering algorithms designed to gradually trade space for time. We perform our analysis on 11 benchmark circuits widely used in the testing community. We also report on the effect of ordering heuristics on tree-clustering and show that, on our benchmarks, the well-known max-cardinality ordering is substantially inferior to an ordering called min-degree. ","An Evaluation of Structural Parameters for Probabilistic Reasoning: Results on Benchmark Circuits" " The study of belief change has been an active area in philosophy and AI. 
In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. Roughly, revision treats a surprising observation as a sign that previous beliefs were wrong, while update treats a surprising observation as an indication that the world has changed. In general, we would expect that an agent making an observation may both want to revise some earlier beliefs and assume that some change has occurred in the world. We define a novel approach to belief change that allows us to do this, by applying ideas from probability theory in a qualitative setting. The key idea is to use a qualitative Markov assumption, which says that state transitions are independent. We show that a recent approach to modeling qualitative uncertainty using plausibility measures allows us to make such a qualitative Markov assumption in a relatively straightforward way, and show how the Markov assumption can be used to provide an attractive belief-change model. ",A Qualitative Markov Assumption and its Implications for Belief Change " Modeling worlds and actions under uncertainty is one of the central problems in the framework of decision-theoretic planning. The representation must be general enough to capture real-world problems but at the same time it must provide a basis upon which theoretical results can be derived. The central notion in the framework we propose here is that of the affine-operator, which serves as a tool for constructing (convex) sets of probability distributions, and which can be considered as a generalization of belief functions and interval mass assignments. Uncertainty in the state of the worlds is modeled with sets of probability distributions, represented by affine-trees while actions are defined as tree-manipulators. A small set of key properties of the affine-operator is presented, forming the basis for most existing operator-based definitions of probabilistic action projection and action abstraction. We derive and prove correct three projection rules, which vividly illustrate the precision-complexity tradeoff in plan projection. Finally, we show how the three types of action abstraction identified by Haddawy and Doan are manifested in the present framework. ",Theoretical Foundations for Abstraction-Based Probabilistic Planning " Recent research has found that diagnostic performance with Bayesian belief networks is often surprisingly insensitive to imprecision in the numerical probabilities. For example, the authors have recently completed an extensive study in which they applied random noise to the numerical probabilities in a set of belief networks for medical diagnosis, subsets of the CPCS network, a subset of the QMR (Quick Medical Reference) focused on liver and bile diseases. The diagnostic performance in terms of the average probabilities assigned to the actual diseases showed small sensitivity even to large amounts of noise. In this paper, we summarize the findings of this study and discuss possible explanations of this low sensitivity. One reason is that the criterion for performance is average probability of the true hypotheses, rather than average error in probability, which is insensitive to symmetric noise distributions. But, we show that even asymmetric, logodds-normal noise has modest effects. A second reason is that the gold-standard posterior probabilities are often near zero or one, and are little disturbed by noise. ","Why Is Diagnosis Using Belief Networks Insensitive to Imprecision In Probabilities?" 
" We report on work towards flexible algorithms for solving decision problems represented as influence diagrams. An algorithm is given to construct a tree structure for each decision node in an influence diagram. Each tree represents a decision function and is constructed incrementally. The improvements to the tree converge to the optimal decision function (neglecting computational costs) and the asymptotic behaviour is only a constant factor worse than dynamic programming techniques, counting the number of Bayesian network queries. Empirical results show how expected utility increases with the size of the tree and the number of Bayesian net calculations. ",Flexible Policy Construction by Information Refinement " Inference algorithms for arbitrary belief networks are impractical for large, complex belief networks. Inference algorithms for specialized classes of belief networks have been shown to be more efficient. In this paper, we present a search-based algorithm for approximate inference on arbitrary, noisy-OR belief networks, generalizing earlier work on search-based inference for two-level, noisy-OR belief networks. Initial experimental results appear promising. ","Efficient Search-Based Inference for Noisy-OR Belief Networks: TopEpsilon" " The validation of data from sensors has become an important issue in the operation and control of modern industrial plants. One approach is to use knowledge based techniques to detect inconsistencies in measured data. This article presents a probabilistic model for the detection of such inconsistencies. Based on probability propagation, this method is able to find the existence of a possible fault among the set of sensors. That is, if an error exists, many sensors present an apparent fault due to the propagation from the sensor(s) with a real fault. So the fault detection mechanism can only tell if a sensor has a potential fault, but it can not tell if the fault is real or apparent. So the central problem is to develop a theory, and then an algorithm, for distinguishing real and apparent faults, given that one or more sensors can fail at the same time. This article then, presents an approach based on two levels: (i) probabilistic reasoning, to detect a potential fault, and (ii) constraint management, to distinguish the real fault from the apparent ones. The proposed approach is exemplified by applying it to a power plant model. ",A Probabilistic Model For Sensor Validation " We present deterministic techniques for computing upper and lower bounds on marginal probabilities in sigmoid and noisy-OR networks. These techniques become useful when the size of the network (or clique size) precludes exact computations. We illustrate the tightness of the bounds by numerical experiments. ",Computing Upper and Lower Bounds on Likelihoods in Intractable Networks " We present a prototype of a decision support system for management of the fungal disease mildew in winter wheat. The prototype is based on an influence diagram which is used to determine the optimal time and dose of mildew treatments. This involves multiple decision opportunities over time, stochasticity, inaccurate information and incomplete knowledge. The paper describes the practical and theoretical problems encountered during the construction of the influence diagram, and also the experience with the prototype. 
",MIDAS - An Influence Diagram for Management of Mildew in Winter Wheat " Although probabilistic inference in a general Bayesian belief network is an NP-hard problem, computation time for inference can be reduced in most practical cases by exploiting domain knowledge and by making approximations in the knowledge representation. In this paper we introduce the property of similarity of states and a new method for approximate knowledge representation and inference which is based on this property. We define two or more states of a node to be similar when the ratio of their probabilities, the likelihood ratio, does not depend on the instantiations of the other nodes in the network. We show that the similarity of states exposes redundancies in the joint probability distribution which can be exploited to reduce the computation time of probabilistic inference in networks with multiple similar states, and that the computational complexity in the networks with exponentially many similar states might be polynomial. We demonstrate our ideas on the example of a BN2O network -- a two layer network often used in diagnostic problems -- by reducing it to a very close network with multiple similar states. We show that the answers to practical queries converge very fast to the answers obtained with the original network. The maximum error is as low as 5% for models that require only 10% of the computation time needed by the original BN2O model. ","Computational Complexity Reduction for BN2O Networks Using Similarity of States" " Uncertainty may be taken to characterize inferences, their conclusions, their premises or all three. Under some treatments of uncertainty, the inferences itself is never characterized by uncertainty. We explore both the significance of uncertainty in the premises and in the conclusion of an argument that involves uncertainty. We argue that for uncertainty to characterize the conclusion of an inference is natural, but that there is an interplay between uncertainty in the premises and uncertainty in the procedure of argument itself. We show that it is possible in principle to incorporate all uncertainty in the premises, rendering uncertainty arguments deductively valid. But we then argue (1) that this does not reflect human argument, (2) that it is computationally costly, and (3) that the gain in simplicity obtained by allowing uncertainty inference can sometimes outweigh the loss of flexibility it entails. ",Uncertain Inferences and Uncertain Conclusions " Like any large system development effort, the construction of a complex belief network model requires systems engineering to manage the design and construction process. We propose a rapid prototyping approach to network engineering. We describe criteria for identifying network modules and the use of ""stubs"" to represent not-yet-constructed modules. We propose an object oriented representation for belief networks which captures the semantics of the problem in addition to conditional independencies and probabilities. Methods for evaluating complex belief network models are discussed. The ideas are illustrated with examples from a large belief network construction problem in the military intelligence domain. ",Network Engineering for Complex Belief Networks " In this paper we propose a framework for combining Disjunctive Logic Programming and Poole's Probabilistic Horn Abduction. We use the concept of hypothesis to specify the probability structure. We consider the case in which probabilistic information is not available. 
Instead of using probability intervals, we allow for the specification of the probabilities of disjunctions. Because minimal models are used as characteristic models in disjunctive logic programming, we apply the principle of indifference on the set of minimal models to derive default probability values. We define the concepts of explanation and partial explanation of a formula, and use them to determine the default probability distribution(s) induced by a program. An algorithm for calculating the default probability of a goal is presented. ",Probabilistic Disjunctive Logic Programming " A naive (or Idiot) Bayes network is a network with a single hypothesis node and several observations that are conditionally independent given the hypothesis. We recently surveyed a number of members of the UAI community and discovered a general lack of understanding of the implications of the Naive Bayes assumption on the kinds of problems that can be solved by these networks. It has long been recognized [Minsky 61] that if observations are binary, the decision surfaces in these networks are hyperplanes. We extend this result (hyperplane separability) to Naive Bayes networks with m-ary observations. In addition, we illustrate the effect of observation-observation dependencies on decision surfaces. Finally, we discuss the implications of these results on knowledge acquisition and research in learning. ",Geometric Implications of the Naive Bayes Assumption " We show that the d-separation criterion constitutes a valid test for conditional independence relationships that are induced by feedback systems involving discrete variables. ",Identifying Independencies in Causal Graphs with Feedback " We derive qualitative relationships about the informational relevance of variables in graphical decision models based on a consideration of the topology of the models. Specifically, we identify dominance relations for the expected value of information on chance variables in terms of their position and relationships in influence diagrams. The qualitative relationships can be harnessed to generate nonnumerical procedures for ordering uncertain variables in a decision model by their informational relevance. ",A Graph-Theoretic Analysis of Information Value " This paper shows how we can combine logical representations of actions and decision theory in such a manner that seems natural for both. In particular we assume an axiomatization of the domain in terms of situation calculus, using what is essentially Reiter's solution to the frame problem, in terms of the completion of the axioms defining the state change. Uncertainty is handled in terms of the independent choice logic, which allows for independent choices and a logic program that gives the consequences of the choices. Part of the consequences is a specification of the utility of (final) states. The robot adopts robot plans, similar to the GOLOG programming language. Within this logic, we can define the expected utility of a conditional plan, based on the axiomatization of the actions, the uncertainty and the utility. The 'planning' problem is to find the plan with the highest expected utility. This is related to recent structured representations for POMDPs; here we use stochastic situation calculus rules to specify the state transition function and the reward/value function. Finally we show that with stochastic frame axioms, action representations in probabilistic STRIPS are exponentially larger than using the representation proposed here. 
","A Framework for Decision-Theoretic Planning I: Combining the Situation Calculus, Conditional Plans, Probability and Utility" " We present two Monte Carlo sampling algorithms for probabilistic inference that guarantee polynomial-time convergence for a larger class of network than current sampling algorithms provide. These new methods are variants of the known likelihood weighting algorithm. We use of recent advances in the theory of optimal stopping rules for Monte Carlo simulation to obtain an inference approximation with relative error epsilon and a small failure probability delta. We present an empirical evaluation of the algorithms which demonstrates their improved performance. ",Optimal Monte Carlo Estimation of Belief Network Inference " Directed acyclic graphs have been used fruitfully to represent causal strucures (Pearl 1988). However, in the social sciences and elsewhere models are often used which correspond both causally and statistically to directed graphs with directed cycles (Spirtes 1995). Pearl (1993) discussed predicting the effects of intervention in models of this kind, so-called linear non-recursive structural equation models. This raises the question of whether it is possible to make inferences about causal structure with cycles, form sample data. In particular do there exist general, informative, feasible and reliable precedures for inferring causal structure from conditional independence relations among variables in a sample generated by an unknown causal structure? In this paper I present a discovery algorithm that is correct in the large sample limit, given commonly (but often implicitly) made plausible assumptions, and which provides information about the existence or non-existence of causal pathways from one variable to another. The algorithm is polynomial on sparse graphs. ",A Discovery Algorithm for Directed Cyclic Graphs " Although the concept of d-separation was originally defined for directed acyclic graphs (see Pearl 1988), there is a natural extension of he concept to directed cyclic graphs. When exactly the same set of d-separation relations hold in two directed graphs, no matter whether respectively cyclic or acyclic, we say that they are Markov equivalent. In other words, when two directed cyclic graphs are Markov equivalent, the set of distributions that satisfy a natural extension of the Global Directed Markov condition (Lauritzen et al. 1990) is exactly the same for each graph. There is an obvious exponential (in the number of vertices) time algorithm for deciding Markov equivalence of two directed cyclic graphs; simply chech all of the d-separation relations in each graph. In this paper I state a theorem that gives necessary and sufficient conditions for the Markov equivalence of two directed cyclic graphs, where each of the conditions can be checked in polynomial time. Hence, the theorem can be easily adapted into a polynomial time algorithm for deciding the Markov equivalence of two directed cyclic graphs. Although space prohibits inclusion of correctness proofs, they are fully described in Richardson (1994b). ","A Polynomial-Time Algorithm for Deciding Markov Equivalence of Directed Cyclic Graphical Models" " SPIRIT is an expert system shell for probabilistic knowledge bases. Knowledge acquisition is performed by processing facts and rules on discrete variables in a rich syntax. The shell generates a probability distribution which respects all acquired facts and rules and which maximizes entropy. 
The user-friendly devices of SPIRIT to define variables, formulate rules and create the knowledge base are revealed in detail. Inductive learning is possible. Medium sized applications show the power of the system. ",Coherent Knowledge Processing at Maximum Entropy by SPIRIT " Belief updating in Bayes nets, a well known computationally hard problem, has recently been approximated by several deterministic algorithms, and by various randomized approximation algorithms. Deterministic algorithms usually provide probability bounds, but have an exponential runtime. Some randomized schemes have a polynomial runtime, but provide only probability estimates. We present randomized algorithms that enumerate high-probability partial instantiations, resulting in probability bounds. Some of these algorithms are also sampling algorithms. Specifically, we introduce and evaluate a variant of backward sampling, both as a sampling algorithm and as a randomized enumeration algorithm. We also relax the implicit assumption used by both sampling and accumulation algorithms, that query nodes must be instantiated in all the samples. ",Sample-and-Accumulate Algorithms for Belief Updating in Bayes Networks " We propose a decision-analytical approach to comparing the flexibility of decision situations from the perspective of a decision-maker who exhibits constant risk-aversion over a monetary value model. Our approach is simple yet seems to be consistent with a variety of flexibility concepts, including robust and adaptive alternatives. We try to compensate within the model for uncertainty that was not anticipated or not modeled. This approach not only allows one to compare the flexibility of plans, but also guides the search for new, more flexible alternatives. ",A Measure of Decision Flexibility " The main goal of this paper is to describe a data structure called binary join trees that are useful in computing multiple marginals efficiently using the Shenoy-Shafer architecture. We define binary join trees, describe their utility, and sketch a procedure for constructing them. ",Binary Join Trees " Over the past several years Bayesian networks have been applied to a wide variety of problems. A central problem in applying Bayesian networks is that of finding one or more of the most probable instantiations of a network. In this paper we develop an efficient algorithm that incrementally enumerates the instantiations of a Bayesian network in decreasing order of probability. Such enumeration algorithms are applicable in a variety of applications ranging from medical expert systems to model-based diagnosis. Fundamentally, our algorithm is simply performing a lazy enumeration of the sorted list of all instantiations of the network. This insight leads to a very concise algorithm statement which is both easily understood and implemented. We show that for singly connected networks, our algorithm generates the next instantiation in time polynomial in the size of the network. The algorithm extends to arbitrary Bayesian networks using standard conditioning techniques. We empirically evaluate the enumeration algorithm and demonstrate its practicality. ",Efficient Enumeration of Instantiations in Bayesian Networks " Chain graphs give a natural unifying point of view on Markov and Bayesian networks and enlarge the potential of graphical models for description of conditional independence structures. 
In the paper a direct graphical separation criterion for chain graphs, called c-separation, which generalizes the d-separation criterion for Bayesian networks, is introduced (recalled). It is equivalent to the classic moralization criterion for chain graphs and complete in the sense that for every chain graph there exists a probability distribution satisfying exactly the conditional independencies derivable from the chain graph by the c-separation criterion. Every class of Markov equivalent chain graphs can be uniquely described by a natural representative, called the largest chain graph. A recovery algorithm, which, on the basis of the (conditional) dependency model induced by an unknown chain graph, finds the corresponding largest chain graph, is presented. ",On Separation Criterion and Recovery Algorithm for Chain Graphs " When we work with information from multiple sources, the formalism each employs to handle uncertainty may not be uniform. In order to be able to combine these knowledge bases of different formats, we need to first establish a common basis for characterizing and evaluating the different formalisms, and provide a semantics for the combined mechanism. A common framework can provide an infrastructure for building an integrated system, and is essential if we are to understand its behavior. We present a unifying framework based on an ordered partition of possible worlds called partition sequences, which corresponds to our intuitive notion of biasing towards certain possible scenarios when we are uncertain of the actual situation. We show that some of the existing formalisms, namely, default logic, autoepistemic logic, probabilistic conditioning and thresholding (generalized conditioning), and possibility theory can be incorporated into this general framework. ","Possible World Partition Sequences: A Unifying Framework for Uncertain Reasoning" " Integrating diagnosis and repair is particularly crucial when gaining sufficient information to discriminate between several candidate diagnoses requires carrying out some repair actions. A typical case is supply restoration in a faulty power distribution system. This problem, which is a major concern for electricity distributors, features partial observability, and stochastic repair actions which are more elaborate than simple replacement of components. This paper analyses the difficulties in applying existing work on integrating model-based diagnosis and repair and on planning in partially observable stochastic domains to this real-world problem, and describes the pragmatic approach we have retained so far. ","Supply Restoration in Power Distribution Systems - A Case Study in Integrating Model-Based Diagnosis and Repair Planning" " For real-time evaluation of a Bayesian network when there is not sufficient time to obtain an exact solution, an approximate solution with a guaranteed response time is required. It is shown that nontraditional methods utilizing estimators based on an archive of trial solutions and genetic search can provide an approximate solution that is considerably superior to the traditional Monte Carlo simulation methods. ",Real Time Estimation of Bayesian Networks " Axiomatization has been widely used for testing logical implications. This paper suggests a non-axiomatic method, the chase, to test if a new dependency follows from a given set of probabilistic dependencies. Although the chase computation may require exponential time in some cases, this technique is a powerful tool for establishing nontrivial theoretical results. 
More importantly, this approach provides valuable insight into the intriguing connection between relational databases and probabilistic reasoning systems. ",Testing Implication of Probabilistic Dependencies " We examine a standard factory scheduling problem with stochastic processing and setup times, minimizing the expectation of the weighted number of tardy jobs. Because the costs of operators in the schedule are stochastic and sequence dependent, standard dynamic programming algorithms such as A* may fail to find the optimal schedule. The SDA* (Stochastic Dominance A*) algorithm remedies this difficulty by relaxing the pruning condition. We present an improved state-space search formulation for these problems and discuss the conditions under which stochastic scheduling problems can be solved optimally using SDA*. In empirical testing on randomly generated problems, we found that in 70%, the expected cost of the optimal stochastic solution is lower than that of the solution derived using a deterministic approximation, with comparable search effort. ",Optimal Factory Scheduling using Stochastic Dominance A* " In learning belief networks, the single link lookahead search is widely adopted to reduce the search space. We show that there exists a class of probabilistic domain models which displays a special pattern of dependency. We analyze the behavior of several learning algorithms using different scoring metrics such as the entropy, conditional independence, minimal description length and Bayesian metrics. We demonstrate that single link lookahead search procedures (employed in these algorithms) cannot learn these models correctly. Thus, when the underlying domain model actually belongs to this class, the use of a single link search procedure will result in learning of an incorrect model. This may lead to inference errors when the model is used. Our analysis suggests that if the prior knowledge about a domain does not rule out the possible existence of these models, a multi-link lookahead search or other heuristics should be used for the learning process. ",Critical Remarks on Single Link Search in Learning Belief Networks " We extend the theory of d-separation to cases in which data instances are not independent and identically distributed. We show that applying the rules of d-separation directly to the structure of probabilistic models of relational data inaccurately infers conditional independence. We introduce relational d-separation, a theory for deriving conditional independence facts from relational models. We provide a new representation, the abstract ground graph, that enables a sound, complete, and computationally efficient method for answering d-separation queries about relational models, and we present empirical results that demonstrate effectiveness. ",Reasoning about Independence in Probabilistic Models of Relational Data " Probabilistic independence can dramatically simplify the task of eliciting, representing, and computing with probabilities in large domains. A key technique in achieving these benefits is the idea of graphical modeling. We survey existing notions of independence for utility functions in a multi-attribute space, and suggest that these can be used to achieve similar advantages. Our new results concern conditional additive independence, which we show always has a perfect representation as separation in an undirected graph (a Markov network). 
Conditional additive independencies entail a particular functional form for the utility function that is analogous to a product decomposition of a probability function, and confers analogous benefits. This functional form has been utilized in the Bayesian network and influence diagram literature, but generally without an explanation in terms of independence. The functional form yields a decomposition of the utility function that can greatly speed up expected utility calculations, particularly when the utility graph has a similar topology to the probabilistic network being used. ",Graphical Models for Preference and Utility " Evaluation of counterfactual queries (e.g., ""If A were true, would C have been true?"") is important to fault diagnosis, planning, determination of liability, and policy analysis. We present a method of evaluating counterfactuals when the underlying causal model is represented by structural models - a nonlinear generalization of the simultaneous equations models commonly used in econometrics and social sciences. This new method provides a coherent means for evaluating policies involving the control of variables which, prior to enacting the policy, were influenced by other variables in the system. ",Counterfactuals and Policy Analysis in Structural Models " We present a new approach to dealing with default information based on the theory of belief functions. Our semantic structures, inspired by Adams' epsilon-semantics, are epsilon-belief assignments, where values committed to focal elements are either close to 0 or close to 1. We define two systems based on these structures, and relate them to other non-monotonic systems presented in the literature. We show that our second system correctly addresses the well-known problems of specificity, irrelevance, blocking of inheritance, ambiguity, and redundancy. ",Belief Functions and Default Reasoning " We describe an application of belief networks to the diagnosis of bottlenecks in computer systems. The technique relies on a high-level functional model of the interaction between application workloads, the Windows NT operating system, and system hardware. Given a workload description, the model predicts the values of observable system counters available from the Windows NT performance monitoring tool. Uncertainty in workloads, predictions, and counter values is characterized with Gaussian distributions. During diagnostic inference, we use observed performance monitor values to find the most probable assignment to the workload parameters. In this paper we provide some background on automated bottleneck detection, describe the structure of the system model, and discuss empirical procedures for model calibration and verification. Part of the calibration process includes generating a dataset to estimate a multivariate Gaussian error model. Initial results in diagnosing bottlenecks are presented. ",Automating Computer Bottleneck Detection with Belief Nets " Chain graphs combine directed and undirected graphs and their underlying mathematics combines properties of the two. This paper gives a simplified definition of chain graphs based on a hierarchical combination of Bayesian (directed) and Markov (undirected) networks. Examples of a chain graph are multivariate feed-forward networks, clustering with conditional interaction between variables, and forms of Bayes classifiers. Chain graphs are then extended using the notation of plates so that samples and data analysis problems can be represented in a graphical model as well. 
Implications for learning are discussed in the conclusion. ",Chain Graphs for Learning " We can perform inference in Bayesian belief networks by enumerating instantiations with high probability, thus approximating the marginals. In this paper, we present a method for determining the fraction of instantiations that has to be considered such that the absolute error in the marginals does not exceed a predefined value. The method is based on extreme value theory. Essentially, the proposed method uses the reversed generalized Pareto distribution to model probabilities of instantiations below a given threshold. Based on this distribution, an estimate can be made of the maximal absolute error if instantiations with probability smaller than u are disregarded. ",Error Estimation in Approximate Bayesian Belief Network Inference " The aim of this paper is to present a method for identifying the structure of a rule in a fuzzy model. For this purpose, an ATMS shall be used (Zurita 1994). An algorithm for obtaining the identification of the structure will be suggested (Castro 1995). The minimal structure of the rule (with respect to the number of variables that must appear in the rule) will be found by this algorithm. Furthermore, the identification parameters shall be obtained simultaneously. The proposed method shall be applied for classification to an example. The Iris Plant Database shall be learnt for all three kinds of plants. ",Generating the Structure of a Fuzzy Rule under Uncertainty " An approach to fault isolation that exploits vastly incomplete models is presented. It relies on separate descriptions of each component behavior, together with the links between them, which enables focusing of the reasoning to the relevant part of the system. As normal observations do not need explanation, the behavior of the components is limited to anomaly propagation. Diagnostic solutions are disorders (fault modes or abnormal signatures) that are consistent with the observations, as well as abductive explanations. An ordinal representation of uncertainty based on possibility theory provides a simple exception-tolerant description of the component behaviors. We can for instance distinguish between effects that are more or less certainly present (or absent) and effects that are more or less certainly present (or absent) when a given anomaly is present. A realistic example illustrates the benefits of this approach. ","Practical Model-Based Diagnosis with Qualitative Possibilistic Uncertainty" " The development of new methods and representations for temporal decision-making requires a principled basis for characterizing and measuring the flexibility of decision strategies in the face of uncertainty. Our goal in this paper is to provide a framework - not a theory - for observing how decision policies behave in the face of informational perturbations, to gain clues as to how they might behave in the face of unanticipated, possibly unarticulated uncertainties. To this end, we find it beneficial to distinguish between two types of uncertainty: ""Small World"" and ""Large World"" uncertainty. The first type can be resolved by posing an unambiguous question to a ""clairvoyant,"" and is anchored on some well-defined aspect of a decision frame. 
The second type is more troublesome, yet it is often of greater interest when we address the issue of flexibility; this type of uncertainty can be resolved only by consulting a ""psychic."" We next observe that one approach to flexibility used in the economics literature is already implicitly accounted for in the Maximum Expected Utility (MEU) principle from decision theory. Though simple, the observation establishes the context for a more illuminating notion of flexibility, what we term flexibility with respect to information revelation. We show how to perform flexibility analysis of a static (i.e., single period) decision problem using a simple example, and we observe that the most flexible alternative thus identified is not necessarily the MEU alternative. We extend our analysis for a dynamic (i.e., multi-period) model, and we demonstrate how to calculate the value of flexibility for decision strategies that allow downstream revision of an upstream commitment decision. ",Decision Flexibility " We present a simple characterization of equivalent Bayesian network structures based on local transformations. The significance of the characterization is twofold. First, we are able to easily prove several new invariant properties of theoretical interest for equivalent structures. Second, we use the characterization to derive an efficient algorithm that identifies all of the compelled edges in a structure. Compelled edge identification is of particular importance for learning Bayesian network structures from data because these edges indicate causal relationships when certain assumptions hold. ","A Transformational Characterization of Equivalent Bayesian Network Structures" " We present two algorithms for exact and approximate inference in causal networks. The first algorithm, dynamic conditioning, is a refinement of cutset conditioning that has linear complexity on some networks for which cutset conditioning is exponential. The second algorithm, B-conditioning, is an algorithm for approximate inference that allows one to trade-off the quality of approximations with the computation time. We also present some experimental results illustrating the properties of the proposed algorithms. ","Conditioning Methods for Exact and Approximate Inference in Causal Networks" " In this paper we study different concepts of independence for convex sets of probabilities. There will be two basic ideas for independence. The first is irrelevance. Two variables are independent when a change on the knowledge about one variable does not affect the other. The second one is factorization. Two variables are independent when the joint convex set of probabilities can be decomposed on the product of marginal convex sets. In the case of the Theory of Probability, these two starting points give rise to the same definition. In the case of convex sets of probabilities, the resulting concepts will be strongly related, but they will not be equivalent. As application of the concept of independence, we shall consider the problem of building a global convex set from marginal convex sets of probabilities. ",Independence Concepts for Convex Sets of Probabilities " The undirected technique for evaluating belief networks [Jensen, et.al., 1990, Lauritzen and Spiegelhalter, 1988] requires clustering the nodes in the network into a junction tree. 
In the traditional view, the junction tree is constructed from the cliques of the moralized and triangulated belief network: triangulation is taken to be the primitive concept, the goal towards which any clustering algorithm (e.g. node elimination) is directed. In this paper, we present an alternative conception of clustering, in which clusters and the junction tree property play the role of primitives: given a graph (not a tree) of clusters which obey (a modified version of) the junction tree property, we transform this graph until we have obtained a tree. There are several advantages to this approach: it is much clearer and easier to understand, which is important for humans who are constructing belief networks; it admits a wider range of heuristics which may enable more efficient or superior clustering algorithms; and it serves as the natural basis for an incremental clustering scheme, which we describe. ",Clustering Without (Thinking About) Triangulation " Bayesian networks provide a method of representing conditional independence between random variables and computing the probability distributions associated with these random variables. In this paper, we extend Bayesian network structures to compute probability density functions for continuous random variables. We make this extension by approximating prior and conditional densities using sums of weighted Gaussian distributions and then finding the propagation rules for updating the densities in terms of these weights. We present a simple example that illustrates the Bayesian network for continuous variables; this example shows the effect of the network structure and approximation errors on the computation of densities for variables in the network. ","Implementation of Continuous Bayesian Networks Using Sums of Weighted Gaussians" " Although the usefulness of belief networks for reasoning under uncertainty is widely accepted, obtaining numerical probabilities that they require is still perceived as a major obstacle. Often not enough statistical data is available to allow for reliable probability estimation. Available information may not be directly amenable to encoding in the network. Finally, domain experts may be reluctant to provide numerical probabilities. In this paper, we propose a method for elicitation of probabilities from a domain expert that is non-invasive and accommodates whatever probabilistic information the expert is willing to state. We express all available information, whether qualitative or quantitative in nature, in a canonical form consisting of (in)equalities expressing constraints on the hyperspace of possible joint probability distributions. We then use this canonical form to derive second-order probability distributions over the desired probabilities. ","Elicitation of Probabilities for Belief Networks: Combining Qualitative and Quantitative Information" " Accepting a proposition means that our confidence in this proposition is strictly greater than the confidence in its negation. This paper investigates the subclass of uncertainty measures, expressing confidence, that capture the idea of acceptance, what we call acceptance functions. Due to the monotonicity property of confidence measures, the acceptance of a proposition entails the acceptance of any of its logical consequences. In agreement with the idea that a belief set (in the sense of Gardenfors) must be closed under logical consequence, it is also required that the separate acceptance of two propositions entail the acceptance of their conjunction. 
Necessity (and possibility) measures agree with this view of acceptance while probability and belief functions generally do not. General properties of acceptance functions are established. The motivation behind this work is the investigation of a setting for belief revision more general than the one proposed by Alchourron, Gardenfors and Makinson, in connection with the notion of conditioning. ",Numerical Representations of Acceptance " The fraud/uncollectible debt problem in the telecommunications industry presents two technical challenges: the detection and the treatment of the account given the detection. In this paper, we focus on the first problem of detection using Bayesian network models, and we briefly discuss the application of a normative expert system for the treatment at the end. We apply Bayesian network models to the problem of fraud/uncollectible debt detection for telecommunication services. In addition to being quite successful at predicting rare event outcomes, they are able to handle a mixture of categorical and continuous data. We present a performance comparison using linear and non-linear discriminant analysis, classification and regression trees, and Bayesian network models. ","Fraud/Uncollectible Debt Detection Using a Bayesian Network Based Learning System: A Rare Binary Outcome with Mixed Data Structures" " The Constraint Satisfaction Problem (CSP) framework offers a simple and sound basis for representing and solving simple decision problems, without uncertainty. This paper is devoted to an extension of the CSP framework enabling us to deal with some decision problems under uncertainty. This extension relies on a differentiation between the agent-controllable decision variables and the uncontrollable parameters whose values depend on the occurrence of uncertain events. The uncertainty on the values of the parameters is assumed to be given under the form of a probability distribution. Two algorithms are given, for computing respectively decisions solving the problem with a maximal probability, and conditional decisions mapping the largest possible number of possible cases to actual decisions. ",A Constraint Satisfaction Approach to Decision under Uncertainty " We examine a new approach to modeling uncertainty based on plausibility measures, where a plausibility measure just associates with an event its plausibility, an element of some partially ordered set. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The lack of structure in a plausibility measure makes it easy for us to add structure on an ""as needed"" basis, letting us examine what is required to ensure that a plausibility measure has certain properties of interest. This gives us insight into the essential features of the properties in question, while allowing us to prove general results that apply to many approaches to reasoning about uncertainty. Plausibility measures have already proved useful in analyzing default reasoning. In this paper, we examine their ""algebraic properties,"" analogues to the use of + and * in probability theory. An understanding of such properties will be essential if plausibility measures are to be used in practice as a representation tool. ",Plausibility Measures: A User's Guide " This paper concerns the probabilistic evaluation of the effects of actions in the presence of unmeasured variables. 
We show that the identification of the causal effect between a singleton variable X and a set of variables Y can be accomplished systematically, in time polynomial in the number of variables in the graph. When the causal effect is identifiable, a closed-form expression can be obtained for the probability that the action will achieve a specified goal, or a set of goals. ",Testing Identifiability of Causal Effects " We present an algorithm, called Predict, for updating beliefs in causal networks quantified with order-of-magnitude probabilities. The algorithm takes advantage of both the structure and the quantification of the network and presents a polynomial asymptotic complexity. Predict exhibits a conservative behavior in that it is always sound but not always complete. We provide sufficient conditions for completeness and present algorithms for testing these conditions and for computing a complete set of plausible values. We propose Predict as an efficient method to estimate probabilistic values and illustrate its use in conjunction with two known algorithms for probabilistic inference. Finally, we describe an application of Predict to plan evaluation, present experimental results, and discuss issues regarding its use with conditional logics of belief, and in the characterization of irrelevance. ",Fast Belief Update Using Order-of-Magnitude Probabilities " We show how to transform any set of prioritized propositional defaults into an equivalent set of parallel (i.e., unprioritized) defaults, in circumscription. We give an algorithm to implement the transform. We show how to use the transform algorithm as a generator of a whole family of inferencing algorithms for circumscription. The method is to employ the transform algorithm as a front end to any inferencing algorithm, e.g., one of the previously available, that handles the parallel (empty) case of prioritization. Our algorithms provide not just coverage of a new expressive class, but also alternatives to previous algorithms for implementing the previously covered class (""layered"") of prioritization. In particular, we give a new query-answering algorithm for prioritized circumscription which is sound and complete for the full expressive class of unrestricted finite prioritization partial orders, for propositional defaults (or minimized predicates). By contrast, previous algorithms required that the prioritization partial order be layered, i.e., structured similar to the system of rank in the military. Our algorithm enables, for the first time, the implementation of the most useful class of prioritization: non-layered prioritization partial orders. Default inheritance, for example, typically requires non-layered prioritization to represent specificity adequately. Our algorithm enables not only the implementation of default inheritance (and specificity) within prioritized circumscription, but also the extension and combination of default inheritance with other kinds of prioritized default reasoning, e.g., with stratified logic programs with negation-as-failure. Such logic programs are previously known to be representable equivalently as layered-priority predicate circumscriptions. Worst-case, the transform increases the number of defaults exponentially. We discuss how inferencing is practically implementable nevertheless in two kinds of situations: general expressiveness but small numbers of defaults, or expressive special cases with larger numbers of defaults. One such expressive special case is non-""top-heaviness"" 
of the prioritization partial order. In addition to its direct implementation, the transform can also be exploited analytically to generate special case algorithms, e.g., a tractable transform for a class within default inheritance (detailed in another, forthcoming paper). We discuss other aspects of the significance of the fundamental result. One can view the transform as reducing n degrees of partially ordered belief confidence to just 2 degrees of confidence: for-sure and (unprioritized) default. Ordinary, parallel default reasoning, e.g., in parallel circumscription or Poole's Theorist, can be viewed in these terms as reducing 2 degrees of confidence to just 1 degree of confidence: that of the non-monotonic theory's conclusions. The expressive reduction's computational complexity suggests that prioritization is valuable for its expressive conciseness, just as defaults are for theirs. For Reiter's Default Logic and Poole's Theorist, the transform implies how to extend those formalisms so as to equip them with a concept of prioritization that is exactly equivalent to that in circumscription. This provides an interesting alternative to Brewka's approach to equipping them with prioritization-type precedence. ",Transforming Prioritized Defaults and Specificity into Parallel Defaults " This paper discusses techniques for performing efficient decision-theoretic planning. We give an overview of the DRIPS decision-theoretic refinement planning system, which uses abstraction to efficiently identify optimal plans. We present techniques for automatically generating search control information, which can significantly improve the planner's performance. We evaluate the efficiency of DRIPS both with and without the search control rules on a complex medical planning problem and compare its performance to that of a branch-and-bound decision tree algorithm. ",Efficient Decision-Theoretic Planning: Techniques and Empirical Analysis " In this paper we deal with a new approach to probabilistic reasoning in a logical framework. Nearly all logics of probability that have been proposed in the literature are based on classical two-valued logic. After making clear the differences between fuzzy logic and probability theory, here we propose a fuzzy logic of probability for which completeness results (in a probabilistic sense) are provided. The main idea behind this approach is that probability values of crisp propositions can be understood as truth-values of some suitable fuzzy propositions associated to the crisp ones. Moreover, suggestions and examples of how to extend the formalism to cope with conditional probabilities and with other uncertainty formalisms are also provided. ",Fuzzy Logic and Probability " This paper presents a probabilistic model for reasoning about the state of a system as it changes over time, both due to exogenous and endogenous influences. Our target domain is a class of medical prediction problems that neither are so urgent as to preclude careful diagnosis nor progress so slowly as to allow arbitrary testing and treatment options. In these domains there is typically enough time to gather information about the patient's state and consider alternative diagnoses and treatments, but the temporal interaction between the timing of tests, treatments, and the course of the disease must also be considered. 
Our approach is to elicit a qualitative structural model of the patient from a human expert---the model identifies important attributes, the way in which exogenous changes affect attribute values, and the way in which the patient's condition changes endogenously. We then elicit probabilistic information to capture the expert's uncertainty about the effects of tests and treatments and the nature and timing of endogenous state changes. This paper describes the model in the context of a problem in treating vehicle accident trauma, and suggests a method for solving the model based on the technique of sequential imputation. A complementary goal of this work is to understand and synthesize a disparate collection of research efforts all using the name ""probabilistic temporal reasoning."" This paper analyzes related work and points out essential differences between our proposed model and other approaches in the literature. ",Probabilistic Temporal Reasoning with Endogenous Change " This is a working paper summarizing results of an ongoing research project whose aim is to uniquely characterize the uncertainty measure for the Dempster-Shafer Theory. A set of intuitive axiomatic requirements is presented, some of their implications are shown, and the proof is given of the minimality of the recently proposed measure AU among all measures satisfying the proposed requirements. ","Toward a Characterization of Uncertainty Measure for the Dempster-Shafer Theory" " We present a precise definition of cause and effect in terms of a fundamental notion called unresponsiveness. Our definition is based on Savage's (1954) formulation of decision theory and departs from the traditional view of causation in that our causal assertions are made relative to a set of decisions. An important consequence of this departure is that we can reason about cause locally, not requiring a causal explanation for every dependency. Such local reasoning can be beneficial because it may not be necessary to determine whether a particular dependency is causal to make a decision. Also in this paper, we examine the graphical encoding of causal relationships. We show that influence diagrams in canonical form are an accurate and efficient representation of causal relationships. In addition, we establish a correspondence between canonical form and Pearl's causal theory. ",A Definition and Graphical Representation for Causality " We examine Bayesian methods for learning Bayesian networks from a combination of prior knowledge and statistical data. In particular, we unify the approaches we presented at last year's conference for discrete and Gaussian domains. We derive a general Bayesian scoring metric, appropriate for both domains. We then use this metric in combination with well-known statistical facts about the Dirichlet and normal-Wishart distributions to derive our metrics for discrete and Gaussian domains. ","Learning Bayesian Networks: A Unification for Discrete and Gaussian Domains" " Whereas acausal Bayesian networks represent probabilistic independence, causal Bayesian networks represent causal relationships. In this paper, we examine Bayesian methods for learning both types of networks. Bayesian methods for learning acausal networks are fairly well developed. These methods often employ assumptions to facilitate the construction of priors, including the assumptions of parameter independence, parameter modularity, and likelihood equivalence. 
We show that although these assumptions also can be appropriate for learning causal networks, we need additional assumptions in order to learn causal networks. We introduce two sufficient assumptions, called mechanism independence and component independence. We show that these new assumptions, when combined with parameter independence, parameter modularity, and likelihood equivalence, allow us to apply methods for learning acausal networks to learn causal networks. ",A Bayesian Approach to Learning Causal Networks " We describe methods for managing the complexity of information displayed to people responsible for making high-stakes, time-critical decisions. The techniques provide tools for real-time control of the configuration and quantity of information displayed to a user, and a methodology for designing flexible human-computer interfaces for monitoring applications. After defining a prototypical set of display decision problems, we introduce the expected value of revealed information (EVRI) and the related measure of expected value of displayed information (EVDI). We describe how these measures can be used to enhance computer displays used for monitoring complex systems. We motivate the presentation by discussing our efforts to employ decision-theoretic control of displays for a time-critical monitoring application at the NASA Mission Control Center in Houston. ",Display of Information for Time-Critical Decision Making " In earlier work, we introduced flexible inference and decision-theoretic metareasoning to address the intractability of normative inference. Here, rather than pursuing the task of computing beliefs and actions with decision models composed of distinctions about uncertain events, we examine methods for inferring beliefs about mathematical truth before an automated theorem prover completes a proof. We employ a Bayesian analysis to update belief in truth, given theorem-proving progress, and show how decision-theoretic methods can be used to determine the value of continuing to deliberate versus taking immediate action in time-critical situations. ","Reasoning, Metareasoning, and Mathematical Truth: Studies of Theorem Proving under Limited Resources" " Bayesian networks offer great potential for use in automating large scale diagnostic reasoning tasks. Gibbs sampling is the main technique used to perform diagnostic reasoning in large, richly interconnected Bayesian networks. Unfortunately Gibbs sampling can take an excessive time to generate a representative sample. In this paper we describe and test a number of heuristic strategies for improving sampling in noisy-or Bayesian networks. The strategies include Monte Carlo Markov chain sampling techniques other than Gibbs sampling. Emphasis is put on strategies that can be implemented in distributed systems. ",Improved Sampling for Diagnostic Reasoning in Bayesian Networks " Consider the situation where some evidence e has been entered into a Bayesian network. When performing conflict analysis, sensitivity analysis, or when answering questions like ""What if the finding on X had been y instead of x?"" you need probabilities P(e' | h), where e' is a subset of e, and h is a configuration of a (possibly empty) set of variables. Cautious propagation is a modification of HUGIN propagation into a Shafer-Shenoy-like architecture. It is less efficient than HUGIN propagation; however, it provides easy access to P(e' | h) for a large number of relevant subsets e'. 
",Cautious Propagation in Bayesian Networks " In this paper we extend the influence diagram (ID) representation for decisions under uncertainty. In the standard ID, arrows into a decision node are only informational; they do not represent constraints on what the decision maker can do. We can represent such constraints only indirectly, using arrows to the children of the decision and sometimes adding more variables to the influence diagram, thus making the ID more complicated. Users of influence diagrams often want to represent constraints by arrows into decision nodes. We represent constraints on decisions by allowing relevance arrows into decision nodes. We call the resulting representation information/relevance influence diagrams (IRIDs). Information/relevance influence diagrams allow for direct representation and specification of constrained decisions. We use a combination of stochastic dynamic programming and Gibbs sampling to solve IRIDs. This method is especially useful when exact methods for solving IDs fail. ",Information/Relevance Influence Diagrams " Stochastic simulation algorithms such as likelihood weighting often give fast, accurate approximations to posterior probabilities in probabilistic networks, and are the methods of choice for very large networks. Unfortunately, the special characteristics of dynamic probabilistic networks (DPNs), which are used to represent stochastic temporal processes, mean that standard simulation algorithms perform very poorly. In essence, the simulation trials diverge further and further from reality as the process is observed over time. In this paper, we present simulation algorithms that use the evidence observed at each time step to push the set of trials back towards reality. The first algorithm, ""evidence reversal"" (ER) restructures each time slice of the DPN so that the evidence nodes for the slice become ancestors of the state variables. The second algorithm, called ""survival of the fittest"" sampling (SOF), ""repopulates"" the set of trials at each time step using a stochastic reproduction rate weighted by the likelihood of the evidence according to each trial. We compare the performance of each algorithm with likelihood weighting on the original network, and also investigate the benefits of combining the ER and SOF methods. The ER/SOF combination appears to maintain bounded error independent of the number of time steps in the simulation. ",Stochastic Simulation Algorithms for Dynamic Probabilistic Networks " Sequential decision tasks with incomplete information are characterized by the exploration problem; namely the trade-off between further exploration for learning more about the environment and immediate exploitation of the accrued information for decision-making. Within artificial intelligence, there has been an increasing interest in studying planning-while-learning algorithms for these decision tasks. In this paper we focus on the exploration problem in reinforcement learning and Q-learning in particular. The existing exploration strategies for Q-learning are of a heuristic nature and they exhibit limited scaleability in tasks with large (or infinite) state and action spaces. Efficient experimentation is needed for resolving uncertainties when possible plans are compared (i.e. exploration). The experimentation should be sufficient for selecting with statistical significance a locally optimal plan (i.e. exploitation). 
For this purpose, we develop a probabilistic hill-climbing algorithm that uses a statistical selection procedure to decide how much exploration is needed for selecting a plan which is, with arbitrarily high probability, arbitrarily close to a locally optimal one. Due to its generality, the algorithm can be employed for the exploration strategy of robust Q-learning. An experiment on a relatively complex control task shows that the proposed exploration strategy performs better than a typical exploration strategy. ",Probabilistic Exploration in Planning while Learning " An important issue in the use of expert systems is the so-called brittleness problem. Expert systems model only a limited part of the world. While the explicit management of uncertainty in expert systems mitigates the brittleness problem, it is still possible for a system to be used, unwittingly, in ways that the system is not prepared to address. Such a situation may be detected by the method of straw models, first presented by Jensen et al. [1990] and later generalized and justified by Laskey [1991]. We describe an algorithm, which we have implemented, that takes as input an annotated diagnostic Bayesian network (the base model) and constructs, without assistance, a bipartite network to be used as a straw model. We show that in some cases this straw model is better than the independent straw model of Jensen et al., the only other straw model for which a construction algorithm has been designed and implemented. ","On the Detection of Conflicts in Diagnostic Bayesian Networks Using Abstraction" " Dawid, Kjaerulff and Lauritzen (1994) provided a preliminary description of a hybrid between Monte-Carlo sampling methods and exact local computations in junction trees. Utilizing the strengths of both methods, such hybrid inference methods have the potential of expanding the class of problems which can be solved under bounded resources as well as solving problems which otherwise resist exact solutions. The paper provides a detailed description of a particular instance of such a hybrid scheme; namely, the combination of exact inference and Gibbs sampling in discrete Bayesian networks. We argue that this combination calls for an extension of the usual message passing scheme of ordinary junction trees. ",HUGS: Combining Exact Inference and Gibbs Sampling in Junction Trees " We show an alternative way of representing a Bayesian belief network by sensitivities and probability distributions. This representation is equivalent to the traditional representation by conditional probabilities, but makes dependencies between nodes apparent and intuitively easy to understand. We also propose a QR matrix representation for the sensitivities and/or conditional probabilities which is more efficient, in both memory requirements and computational speed, than the traditional representation for computer-based implementations of probabilistic inference. We use sensitivities to show that for a certain class of binary networks, the computation time for approximate probabilistic inference with any positive upper bound on the error of the result is independent of the size of the network. Finally, as an alternative to traditional algorithms that use conditional probabilities, we describe an exact algorithm for probabilistic inference that uses the QR-representation for sensitivities and updates probability distributions of nodes in a network according to messages from the neighbors. 
","Sensitivities: An Alternative to Conditional Probabilities for Bayesian Belief Networks" " Classically, risk is characterized by a point value probability indicating the likelihood of occurrence of an adverse effect. However, there are domains where the attainability of objective numerical risk characterizations is increasingly being questioned. This paper reviews the arguments in favour of extending classical techniques of risk assessment to incorporate meaningful qualitative and weak quantitative risk characterizations. A technique in which linguistic uncertainty terms are defined in terms of patterns of argument is then proposed. The technique is demonstrated using a prototype computer-based system for predicting the carcinogenic risk due to novel chemical compounds. ",Is There a Role for Qualitative Risk Assessment? " Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argue that, although MDPs can be solved efficiently in theory, more study is needed to reveal practical algorithms for solving large problems quickly. To encourage future research, we sketch some alternative methods of analysis that rely on the structure of MDPs. ",On the Complexity of Solving Markov Decision Problems " This paper presents correct algorithms for answering the following two questions; (i) Does there exist a causal explanation consistent with a set of background knowledge which explains all of the observed independence facts in a sample? (ii) Given that there is such a causal explanation what are the causal relationships common to every such causal explanation? ",Causal Inference and Causal Explanation with Background Knowledge " A completeness result for d-separation applied to discrete Bayesian networks is presented and it is shown that in a strong measure-theoretic sense almost all discrete distributions for a given network structure are faithful; i.e. the independence facts true of the distribution are all and only those entailed by the network structure. ",Strong Completeness and Faithfulness in Bayesian Networks " We define a context-sensitive temporal probability logic for representing classes of discrete-time temporal Bayesian networks. Context constraints allow inference to be focused on only the relevant portions of the probabilistic knowledge. We provide a declarative semantics for our language. We present a Bayesian network construction algorithm whose generated networks give sound and complete answers to queries. We use related concepts in logic programming to justify our approach. We have implemented a Bayesian network construction algorithm for a subset of the theory and demonstrate it's application to the problem of evaluating the effectiveness of treatments for acute cardiac conditions. ","A Theoretical Framework for Context-Sensitive Temporal Probability Model Construction with Application to Plan Projection" " In recent years there has been a spate of papers describing systems for probabilisitic reasoning which do not use numerical probabilities. In some cases the simple set of values used by these systems make it impossible to predict how a probability will change or which hypothesis is most likely given certain evidence. 
This paper concentrates on such situations, and suggests a number of ways in which they may be resolved by refining the representation. ",Refining Reasoning in Qualitative Probabilistic Networks " Certain causal models involving unmeasured variables induce no independence constraints among the observed variables but imply, nevertheless, inequality constraints on the observed distribution. This paper derives a general formula for such instrumental variables, that is, exogenous variables that directly affect some variables but not all. With the help of this formula, it is possible to test whether a model involving instrumental variables may account for the data, or, conversely, whether a given variable can be deemed instrumental. ","On the Testability of Causal Models with Latent and Instrumental Variables" " The paper concerns the probabilistic evaluation of plans in the presence of unmeasured variables, each plan consisting of several concurrent or sequential actions. We establish a graphical criterion for recognizing when the effects of a given plan can be predicted from passive observations on measured variables only. When the criterion is satisfied, a closed-form expression is provided for the probability that the plan will achieve a specified goal. ","Probabilistic Evaluation of Sequential Plans from Causal Models with Hidden Variables" " This paper introduces the independent choice logic, and in particular the ""single agent with nature"" instance of the independent choice logic, namely ICLdt. This is a logical framework for decision making under uncertainty that extends both logic programming and stochastic models such as influence diagrams. This paper shows how the representation of a decision problem within the independent choice logic can be exploited to cut down the combinatorics of dynamic programming. One of the main problems with influence diagram evaluation techniques is the need to optimise a decision for all values of the 'parents' of a decision variable. In this paper we show how the rule-based nature of the ICLdt can be exploited so that we only make distinctions in the values of the information available for a decision that will make a difference to utility. ","Exploiting the Rule Structure for Decision Making within the Independent Choice Logic" " Bayesian belief networks are being increasingly used as a knowledge representation for diagnostic reasoning. One simple method for conducting diagnostic reasoning is to represent system faults and observations only. In this paper, we investigate how having intermediate nodes (nodes other than fault and observation nodes) affects the diagnostic performance of a Bayesian belief network. We conducted a series of experiments on a set of real belief networks for medical diagnosis in liver and bile disease. We compared the effects on diagnostic performance of a two-level network consisting just of disease and finding nodes with that of a network which models intermediate pathophysiological disease states as well. We provide some theoretical evidence for differences observed between the abstracted two-level network and the full network. ","Abstraction in Belief Networks: The Role of Intermediate States in Diagnostic Reasoning" " Typical approaches to plan recognition start from a representation of an agent's possible plans, and reason evidentially from observations of the agent's actions to assess the plausibility of the various candidates. 
A more expansive view of the task (consistent with some prior work) accounts for the context in which the plan was generated, the mental state and planning process of the agent, and consequences of the agent's actions in the world. We present a general Bayesian framework encompassing this view, and focus on how context can be exploited in plan recognition. We demonstrate the approach on a problem in traffic monitoring, where the objective is to induce the plan of the driver from observation of vehicle movements. Starting from a model of how the driver generates plans, we show how the highway context can appropriately influence the recognizer's interpretation of observed driver behavior. ","Accounting for Context in Plan Recognition, with Application to Traffic Monitoring" " The main goal of this paper is to describe a new pruning method for solving decision trees and game trees. The pruning method for decision trees suggests a slight variant of decision trees that we call scenario trees. In scenario trees, we do not need a conditional probability for each edge emanating from a chance node. Instead, we require a joint probability for each path from the root node to a leaf node. We compare the pruning method to the traditional rollback method for decision trees and game trees. For problems that require Bayesian revision of probabilities, a scenario tree representation with the pruning method is more efficient than a decision tree representation with the rollback method. For game trees, the pruning method is more efficient than the rollback method. ",A New Pruning Method for Solving Decision Trees and Game Trees " The use of directed acyclic graphs (DAGs) to represent conditional independence relations among random variables has proved fruitful in a variety of ways. Recursive structural equation models are one kind of DAG model. However, non-recursive structural equation models of the kinds used to model economic processes are naturally represented by directed cyclic graphs with independent errors. For such systems, a characterization of conditional independence constraints is obtained, and it is shown that the result generalizes in a natural way to systems in which the error variables or noises are statistically dependent. For non-linear systems with independent errors, a sufficient condition for conditional independence of variables in associated distributions is obtained. ",Directed Cyclic Graphical Representations of Feedback Models " We show that there is a general, informative and reliable procedure for discovering causal relations when, for all the investigator knows, both latent variables and selection bias may be at work. Given information about conditional independence and dependence relations between measured variables, even when latent variables and selection bias may be present, there are sufficient conditions for reliably concluding that there is a causal path from one variable to another, and sufficient conditions for reliably concluding when no such causal path exists. ",Causal Inference in the Presence of Latent Variables and Selection Bias " Probabilistic model-based diagnosis computes the posterior probabilities of failure of components from the prior probabilities of component failure and observations of system behavior. One problem with this method is that such priors are almost never directly available. 
One of the reasons is that the prior probability estimates include an implicit notion of a time interval over which they are specified -- for example, if the probability of failure of a component is 0.05, is this over the period of a day or is this over a week? A second problem facing probabilistic model-based diagnosis is the modeling of persistence. Say we have an observation about a system at time t_1 and then another observation at a later time t_2. To compute posterior probabilities that take into account both the observations, we need some model of how the state of the system changes from time t_1 to t_2. In this paper, we address these problems using techniques from Reliability theory. We show how to compute the failure prior of a component from an empirical measure of its reliability -- the Mean Time Between Failure (MTBF). We also develop a scheme to model persistence when handling multiple time tagged observations. ",Modeling Failure Priors and Persistence in Model-Based Diagnosis " The goal of diagnosis is to compute good repair strategies in response to anomalous system behavior. In a decision theoretic framework, a good repair strategy has low expected cost. In a general formulation of the problem, the computation of the optimal (lowest expected cost) repair strategy for a system with multiple faults is intractable. In this paper, we consider an interesting and natural restriction on the behavior of the system being diagnosed: (a) the system exhibits faulty behavior if and only if one or more components is malfunctioning. (b) The failures of the system components are independent. Given this restriction on system behavior, we develop a polynomial time algorithm for computing the optimal repair strategy. We then go on to introduce a system hierarchy and the notion of inspecting (testing) components before repair. We develop a linear time algorithm for computing an optimal repair strategy for the hierarchical system which includes both repair and inspection. ","A Polynomial Algorithm for Computing the Optimal Repair Strategy in a System with Independent Component Failures" " The goal of model-based diagnosis is to isolate causes of anomalous system behavior and recommend inexpensive repair actions in response. In general, precomputing optimal repair policies is intractable. To date, investigators addressing this problem have explored approximations that either impose restrictions on the system model (such as a single fault assumption) or compute an immediate best action with limited lookahead. In this paper, we develop a formulation of repair in model-based diagnosis and a repair algorithm that computes optimal sequences of actions. This optimal approach is costly but can be applied to precompute an optimal repair strategy for compact systems. We show how we can exploit a hierarchical system specification to make this approach tractable for large systems. When introducing hierarchy, we also consider the tradeoff between simply replacing a component and decomposing it to repair its subcomponents. The hierarchical repair algorithm is suitable for off-line precomputation of an optimal repair strategy. A modification of the algorithm takes advantage of an iterative deepening scheme to trade off inference time and the quality of the computed strategy. 
","Exploiting System Hierarchy to Compute Repair Plans in Probabilistic Model-based Diagnosis" " Standard algorithms for finding the shortest path in a graph require that the cost of a path be additive in edge costs, and typically assume that costs are deterministic. We consider the problem of uncertain edge costs, with potential probabilistic dependencies among the costs. Although these dependencies violate the standard dynamic-programming decomposition, we identify a weaker stochastic consistency condition that justifies a generalized dynamic-programming approach based on stochastic dominance. We present a revised path-planning algorithm and prove that it produces optimal paths under time-dependent uncertain costs. We test the algorithm by applying it to a model of stochastic bus networks, and present empirical performance results comparing it to some alternatives. Finally, we consider extensions of these concepts to a more general class of problems of heuristic search under uncertainty. ",Path Planning under Time-Dependent Uncertainty " We develop a new semantics for defeasible inference based on extended probability measures allowed to take infinitesimal values, on the interpretation of defaults as generalized conditional probability constraints and on a preferred-model implementation of entropy maximization. ","Defaults and Infinitesimals: Defeasible Inference by Nonarchimedean Entropy-Maximization" " This paper develops a simple calculus for order of magnitude reasoning. A semantics is given with soundness and completeness results. Order of magnitude probability functions are easily defined and turn out to be equivalent to kappa functions, which are slight generalizations of Spohn's Natural Conditional Functions. The calculus also gives rise to an order of magnitude decision theory, which can be used to justify an amended version of Pearl's decision theory for kappa functions, although the latter is weaker and less expressive. ",An Order of Magnitude Calculus " This paper discusses a method for implementing a probabilistic inference system based on an extended relational data model. This model provides a unified approach for a variety of applications such as dynamic programming, solving sparse linear equations, and constraint propagation. In this framework, the probability model is represented as a generalized relational database. Subsequent probabilistic requests can be processed as standard relational queries. Conventional database management systems can be easily adopted for implementing such an approximate reasoning system. ",A Method for Implementing a Probabilistic Model as a Relational Database " Recent developments show that Multiply Sectioned Bayesian Networks (MSBNs) can be used for diagnosis of natural systems as well as for model-based diagnosis of artificial systems. They can be applied to single-agent oriented reasoning systems as well as multi-agent distributed probabilistic reasoning systems. Belief propagation between a pair of subnets plays a central role in maintenance of global consistency in a MSBN. This paper studies the operation UpdateBelief, presented originally with MSBNs, for inter-subnet propagation. We analyze how the operation achieves its intended functionality, which provides hints as for how its efficiency can be improved. We then define two new versions of UpdateBelief that reduce the computation time for inter-subnet propagation. 
One of them is optimal in the sense that the minimum amount of computation for coordinating multi-linkage belief propagation is required. The optimization problem is solved through the solution of a graph-theoretic problem: the minimum weight open tour in a tree. ","Optimization of Inter-Subnet Belief Updating in Multiply Sectioned Bayesian Networks" " In this paper, we present two methods to provide explanations for reasoning with belief functions in the valuation-based systems. One approach, inspired by Strat's method, is based on sensitivity analysis, but its computation is simpler and thus easier to implement than Strat's. The other one is to examine the impact of evidence on the conclusion based on the measure of the information content in the evidence. We show the property of additivity for the pieces of evidence that are conditionally independent within the context of the valuation-based systems. We will give an example to show how these approaches are applied in an evidential network. ",Generating Explanations for Evidential Reasoning " This paper reports experiments with the causal independence inference algorithm proposed by Zhang and Poole (1994b) on the CPSC network created by Pradhan et al. (1994). It is found that the algorithm is able to answer 420 of the 422 possible zero-observation queries, 94 of 100 randomly generated five-observation queries, 87 of 100 randomly generated ten-observation queries, and 69 of 100 randomly generated twenty-observation queries. ",Inference with Causal Independence in the CPSC Network " The information retrieval systems that are present nowadays are mainly based on full text matching of keywords or topic-based classification. This matching of keywords often returns a large amount of irrelevant information and this does not meet the user's query requirement. In order to solve this problem and to enhance the search using a semantic environment, a technique named ontology is implemented for the field of poultry in this paper. Ontology is an emerging technique in the current field of research in semantic environment. This paper constructs ontology using the tool named Protege version 4.0 and this also generates Resource Description Framework schema and XML scripts for using the poultry ontology on the web. ","An Ontology Construction Approach for the Domain Of Poultry Science Using Protege" " Handling visual complexity is a challenging problem in visualization owing to the subjectiveness of its definition and the difficulty in devising generalizable quantitative metrics. In this paper we address this challenge by measuring the visual complexity of two common forms of cluster-based visualizations: scatter plots and parallel coordinates. We conceptualize visual complexity as a form of visual uncertainty, which is a measure of the degree of difficulty for humans to interpret a visual representation correctly. We propose an algorithm for estimating visual complexity for the aforementioned visualizations using Allen's interval algebra. We first establish a set of primitive 2-cluster cases in scatter plots and another set for parallel coordinates based on symmetric isomorphism. We confirm that both are the minimal sets and verify the correctness of their members computationally. We score the uncertainty of each primitive case based on its topological properties, including the existence of overlapping regions, splitting regions and meeting points or edges. 
We compare a few optional scoring schemes against a set of subjective scores by humans, and identify the one that is the most consistent with the subjective scores. Finally, we extend the 2-cluster measure to a k-cluster measure as a general-purpose estimator of visual complexity for these two forms of cluster-based visualization. ",Measuring Visual Complexity of Cluster-Based Visualizations " A modification of the conceptual clustering algorithm Cobweb for application to numerical data is offered. Keywords: clustering, algorithm Cobweb, numerical data, fuzzy membership function. ","Modification of conceptual clustering algorithm Cobweb for numerical data using fuzzy membership function" " Modelling of complex systems is mainly based on the decomposition of these systems into autonomous elements, and the identification and definition of possible interactions between these elements. For this, the agent-based approach is a modelling solution often proposed. Complexity can also be due to events external or internal to systems, whose main characteristics are uncertainty, imprecision, or whose perception is subjective (i.e. interpreted). Insofar as fuzzy logic provides a solution for modelling uncertainty, the concept of fuzzy agent can model both complexity and uncertainty. This paper focuses on introducing the concept of fuzzy agent: a classical agent architecture is redefined according to a fuzzy perspective. A pedagogical illustration of the fuzzy agentification of a smart watering system is then proposed. ",A Modelling Approach Based on Fuzzy Agents " Bayesian learning of belief networks (BLN) is a method for automatically constructing belief networks (BNs) from data using search and Bayesian scoring techniques. K2 is a particular instantiation of the method that implements a greedy search strategy. To evaluate the accuracy of K2, we randomly generated a number of BNs and for each of those we simulated data sets. K2 was then used to induce the generating BNs from the simulated data. We examine the performance of the program, and the factors that influence it. We also present a simple BN model, developed from our results, which predicts the accuracy of K2 when given various characteristics of the data set. ","An Evaluation of an Algorithm for Inductive Learning of Bayesian Belief Networks Usin" " We have previously reported a Bayesian algorithm for determining the coordinates of points in three-dimensional space from uncertain constraints. This method is useful in the determination of biological molecular structure. It is limited, however, by the requirement that the uncertainty in the constraints be normally distributed. In this paper, we present an extension of the original algorithm that allows constraint uncertainty to be represented as a mixture of Gaussians, and thereby allows arbitrary constraint distributions. We illustrate the performance of this algorithm on a problem drawn from the domain of molecular structure determination, in which a multicomponent constraint representation produces a much more accurate solution than the old single-component mechanism. The new mechanism uses mixture distributions to decompose the problem into a set of independent problems with unimodal constraint uncertainty. The results of the unimodal subproblems are periodically recombined using Bayes' law, to avoid combinatorial explosion. The new algorithm is particularly suited for parallel implementation. 
",Probabilistic Constraint Satisfaction with Non-Gaussian Noise " This paper examines the ""K2"" network scoring metric of Cooper and Herskovits. It shows counterintuitive results from applying this metric to simple networks. One family of noninformative priors is suggested for assigning equal scores to equivalent networks. ",A Bayesian Method Reexamined " Laplace's method, a family of asymptotic methods used to approximate integrals, is presented as a potential candidate for the tool box of techniques used for knowledge acquisition and probabilistic inference in belief networks with continuous variables. This technique approximates posterior moments and marginal posterior distributions with reasonable accuracy [errors are O(n^-2) for posterior means] in many interesting cases. The method also seems promising for computing approximations for Bayes factors for use in the context of model selection, model uncertainty and mixtures of pdfs. The limitations, regularity conditions and computational difficulties for the implementation of Laplace's method are comparable to those associated with the methods of maximum likelihood and posterior mode analysis. ","Laplace's Method Approximations for Probabilistic Inference in Belief Networks with Continuous Variables" " In previous work [BGHK92, BGHK93], we have studied the random-worlds approach -- a particular (and quite powerful) method for generating degrees of belief (i.e., subjective probabilities) from a knowledge base consisting of objective (first-order, statistical, and default) information. But allowing a knowledge base to contain only objective information is sometimes limiting. We occasionally wish to include information about degrees of belief in the knowledge base as well, because there are contexts in which old beliefs represent important information that should influence new beliefs. In this paper, we describe three quite general techniques for extending a method that generates degrees of belief from objective information to one that can make use of degrees of belief as well. All of our techniques are bloused on well-known approaches, such as cross-entropy. We discuss general connections between the techniques and in particular show that, although conceptually and technically quite different, all of the techniques give the same answer when applied to the random-worlds method. ",Generating New Beliefs From Old " Evaluation of counterfactual queries (e.g., ""If A were true, would C have been true?"") is important to fault diagnosis, planning, and determination of liability. In this paper we present methods for computing the probabilities of such queries using the formulation proposed in [Balke and Pearl, 1994], where the antecedent of the query is interpreted as an external action that forces the proposition A to be true. When a prior probability is available on the causal mechanisms governing the domain, counterfactual probabilities can be evaluated precisely. However, when causal knowledge is specified as conditional probabilities on the observables, only bounds can computed. This paper develops techniques for evaluating these bounds, and demonstrates their use in two applications: (1) the determination of treatment efficacy from studies in which subjects may choose their own treatment, and (2) the determination of liability in product-safety litigation. 
","Counterfactual Probabilities: Computational Methods, Bounds and Applications" " We discuss the problem of construction of inference procedures which can manipulate with uncertainties measured in ordinal scales and fulfill to the property of strict monotonicity of conclusion. The class of A-valuations of plausibility is considered where operations based only on information about linear ordering of plausibility values are used. In this class the modus ponens generating function fulfiling to the property of strict monotonicity of conclusions is introduced. ","Modus Ponens Generating Function in the Class of ^-valuations of Plausibility" " We propose an integration of possibility theory into non-classical logics. We obtain many formal results that generalize the case where possibility and necessity functions are based on classical logic. We show how useful such an approach is by applying it to reasoning under uncertain and inconsistent information. ",Possibility and Necessity Functions over Non-classical Logics " Some instances of creative thinking require an agent to build and test hypothetical theories. Such a reasoner needs to explore the space of not only those situations that have occurred in the past, but also those that are rationally conceivable. In this paper we present a formalism for exploring the space of conceivable situation-models for those domains in which the knowledge is primarily probabilistic in nature. The formalism seeks to construct consistent, minimal, and desirable situation-descriptions by selecting suitable domain-attributes and dependency relationships from the available domain knowledge. ",Exploratory Model Building " I describe a planning methodology for domains with uncertainty in the form of external events that are not completely predictable. The events are represented by enabling conditions and probabilities of occurrence. The planner is goal-directed and backward chaining, but the subgoals are suggested by analyzing the probability of success of the partial plan rather than being simply the open conditions of the operators in the plan. The partial plan is represented as a Bayesian belief net to compute its probability of success. Since calculating the probability of success of a plan can be very expensive I introduce two other techniques for computing it, one that uses Monte Carlo simulation to estimate it and one based on a Markov chain representation that uses knowledge about the dependencies between the predicates describing the domain. ",Planning with External Events " Bayesian belief network learning algorithms have three basic components: a measure of a network structure and a database, a search heuristic that chooses network structures to be considered, and a method of estimating the probability tables from the database. This paper contributes to all these three topics. The behavior of the Bayesian measure of Cooper and Herskovits and a minimum description length (MDL) measure are compared with respect to their properties for both limiting size and finite size databases. It is shown that the MDL measure has more desirable properties than the Bayesian measure when a distribution is to be learned. It is shown that selecting belief networks with certain minimallity properties is NP-hard. This result justifies the use of search heuristics instead of exact algorithms for choosing network structures to be considered. 
In some cases, a collection of belief networks can be represented by a single belief network which leads to a new kind of probability table estimation called smoothing. We argue that smoothing can be efficiently implemented by incorporating it in the search heuristic. Experimental results suggest that for learning probabilities of belief networks smoothing is helpful. ",Properties of Bayesian Belief Network Learning Algorithms " Simulation schemes for probabilistic inference in Bayesian belief networks offer many advantages over exact algorithms; for example, these schemes have a linear and thus predictable runtime while exact algorithms have exponential runtime. Experiments have shown that likelihood weighting is one of the most promising simulation schemes. In this paper, we present a new simulation scheme that generates samples more evenly spread in the sample space than the likelihood weighting scheme. We show both theoretically and experimentally that the stratified scheme outperforms likelihood weighting in average runtime and error in estimates of beliefs. ",A Stratified Simulation Scheme for Inference in Bayesian Belief Networks " The expected value of information (EVI) is the most powerful measure of sensitivity to uncertainty in a decision model: it measures the potential of information to improve the decision, and hence measures the expected value of outcome. Standard methods for computing EVI use discrete variables and are computationally intractable for models that contain more than a few variables. Monte Carlo simulation provides the basis for more tractable evaluation of large predictive models with continuous and discrete variables, but so far computation of EVI in a Monte Carlo setting also has appeared impractical. We introduce an approximate approach based on pre-posterior analysis for estimating EVI in Monte Carlo models. Our method uses a linear approximation to the value function and multiple linear regression to estimate the linear model from the samples. The approach is efficient and practical for extremely large models. It allows easy estimation of EVI for perfect or partial information on individual variables or on combinations of variables. We illustrate its implementation within Demos (a decision modeling system), and its application to a large model for crisis transportation planning. ",Efficient Estimation of the Value of Information in Monte Carlo Models " A BN2O network is a two level belief net in which the parent interactions are modeled using the noisy-or interaction model. In this paper we discuss application of the SPI local expression language to efficient inference in large BN2O networks. In particular, we show that there is significant structure, which can be exploited to improve over the Quickscore result. We further describe how symbolic techniques can provide information which can significantly reduce the computation required for computing all cause posterior marginals. Finally, we present a novel approximation technique with preliminary experimental results. ",Symbolic Probabilitistic Inference in Large BN2O Networks " This work proposes action networks as a semantically well-founded framework for reasoning about actions and change under uncertainty. Action networks add two primitives to probabilistic causal networks: controllable variables and persistent variables. Controllable variables allow the representation of actions as directly setting the value of specific events in the domain, subject to preconditions. 
Persistent variables provide a canonical model of persistence according to which both the state of a variable and the causal mechanism dictating its value persist over time unless intervened upon by an action (or its consequences). Action networks also allow different methods for quantifying the uncertainty in causal relationships, which go beyond traditional probabilistic quantification. This paper describes both recent results and work in progress. ","Action Networks: A Framework for Reasoning about Actions and Change under Uncertainty" " We study the connection between kappa calculus and probabilistic reasoning in diagnosis applications. Specifically, we abstract a probabilistic belief network for diagnosing faults into a kappa network and compare the ordering of faults computed using both methods. We show that, at least for the example examined, the ordering of faults coincide as long as all the causal relations in the original probabilistic network are taken into account. We also provide a formal analysis of some network structures where the two methods will differ. Both kappa rankings and infinitesimal probabilities have been used extensively to study default reasoning and belief revision. But little has been done on utilizing their connection as outlined above. This is partly because the relation between kappa and probability calculi assumes that probabilities are arbitrarily close to one (or zero). The experiments in this paper investigate this relation when this assumption is not satisfied. The reported results have important implications on the use of kappa rankings to enhance the knowledge engineering of uncertainty models. ",On the Relation between Kappa Calculus and Probabilistic Reasoning " When agents devise plans for execution in the real world, they face two important forms of uncertainty: they can never have complete knowledge about the state of the world, and they do not have complete control, as the effects of their actions are uncertain. While most classical planning methods avoid explicit uncertainty reasoning, we believe that uncertainty should be explicitly represented and reasoned about. We develop a probabilistic representation for states and actions, based on belief networks. We define conditional belief nets (CBNs) to capture the probabilistic dependency of the effects of an action upon the state of the world. We also use a CBN to represent the intrinsic relationships among entities in the environment, which persist from state to state. We present a simple projection algorithm to construct the belief network of the state succeeding an action, using the environment CBN model to infer indirect effects. We discuss how the qualitative aspects of belief networks and CBNs make them appropriate for the various stages of the problem solving process, from model construction to the design of planning algorithms. ","A Structured, Probabilistic Representation of Action" " We investigate planning in time-critical domains represented as Markov Decision Processes, showing that search based techniques can be a very powerful method for finding close to optimal plans. To reduce the computational cost of planning in these domains, we execute actions as we construct the plan, and sacrifice optimality by searching to a fixed depth and using a heuristic function to estimate the value of states. Although this paper concentrates on the search algorithm, we also discuss ways of constructing heuristic functions suitable for this approach. 
Our results show that by interleaving search and execution, close to optimal policies can be found without the computational requirements of other approaches. ",Integrating Planning and Execution in Stochastic Domains " Most algorithms for propagating evidence through belief networks have been exact and exhaustive: they produce an exact (point-valued) marginal probability for every node in the network. Often, however, an application will not need information about every node in the network nor will it need exact probabilities. We present the localized partial evaluation (LPE) propagation algorithm, which computes interval bounds on the marginal probability of a specified query node by examining a subset of the nodes in the entire network. Conceptually, LPE ignores parts of the network that are ""too far away"" from the queried node to have much impact on its value. LPE has the ""anytime"" property of being able to produce better solutions (tighter intervals) given more time to consider more of the network. ",Localized Partial Evaluation of Belief Networks " AI planning algorithms have addressed the problem of generating sequences of operators that achieve some input goal, usually assuming that the planning agent has perfect control over and information about the world. Relaxing these assumptions requires an extension to the action representation that allows reasoning both about the changes an action makes and the information it provides. This paper presents an action representation that extends the deterministic STRIPS model, allowing actions to have both causal and informational effects, both of which can be context-dependent and noisy. We also demonstrate how a standard least-commitment planning algorithm can be extended to include informational actions and contingent execution. ","A Probabilistic Model of Action for Least-Commitment Planning with Information Gather" " Several Artificial Intelligence schemes for reasoning under uncertainty explore either explicitly or implicitly asymmetries among probabilities of various states of their uncertain domain models. Even though the correct working of these schemes is practically contingent upon the existence of a small number of probable states, no formal justification has been proposed of why this should be the case. This paper attempts to fill this apparent gap by studying asymmetries among probabilities of various states of uncertain models. By rewriting the joint probability distribution over a model's variables into a product of individual variables' prior and conditional probability distributions, and applying the central limit theorem to this product, we can demonstrate that the probabilities of individual states of the model can be expected to be drawn from highly skewed, log-normal distributions. With sufficient asymmetry in individual prior and conditional probability distributions, a small fraction of states can be expected to cover a large portion of the total probability space, with the remaining states having practically negligible probability. Theoretical discussion is supplemented by simulation results and an illustrative real-world example. ",Some Properties of Joint Probability Distributions " An ordinal view of independence is studied in the framework of possibility theory. We investigate three possible definitions of dependence, of increasing strength. One of them is the counterpart to the multiplication law in probability theory, and the two others are based on the notion of conditional possibility. 
These two have enough expressive power to support the whole possibility theory, and a complete axiomatization is provided for the strongest one. Moreover we show that weak independence is well-suited to the problems of belief change and plausible reasoning, especially to address the problem of blocking of property inheritance in exception-tolerant taxonomic reasoning. ",An Ordinal View of Independence with Application to Plausible Reasoning " Penalty logic, introduced by Pinkas, associates to each formula of a knowledge base the price to pay if this formula is violated. Penalties may be used as a criterion for selecting preferred consistent subsets in an inconsistent knowledge base, thus inducing a non-monotonic inference relation. A precise formalization and the main properties of penalty logic and of its associated non-monotonic inference relation are given in the first part. We also show that penalty logic and Dempster-Shafer theory are related, especially in the infinitesimal case. ",Penalty logic and its Link with Dempster-Shafer Theory " In this paper, we introduce evidence propagation operations on influence diagrams and a concept of value of evidence, which measures the value of experimentation. Evidence propagation operations are critical for the computation of the value of evidence, general update and inference operations in normative expert systems which are based on the influence diagram (generalized Bayesian network) paradigm. The value of evidence allows us to compute directly an outcome sensitivity, a value of perfect information and a value of control which are used in decision analysis (the science of decision making under uncertainty). More specifically, the outcome sensitivity is the maximum difference among the values of evidence, the value of perfect information is the expected value of the values of evidence, and the value of control is the optimal value of the values of evidence. We also discuss an implementation and a relative computational efficiency issues related to the value of evidence and the value of perfect information. ",Value of Evidence on Influence Diagrams " Possibilistic conditional independence is investigated: we propose a definition of this notion similar to the one used in probability theory. The links between independence and non-interactivity are investigated, and properties of these relations are given. The influence of the conjunction used to define a conditional measure of possibility is also highlighted: we examine three types of conjunctions: Lukasiewicz - like T-norms, product-like T-norms and the minimum operator. ",Conditional Independence in Possibility Theory " Backward simulation is an approximate inference technique for Bayesian belief networks. It differs from existing simulation methods in that it starts simulation from the known evidence and works backward (i.e., contrary to the direction of the arcs). The technique's focus on the evidence leads to improved convergence in situations where the posterior beliefs are dominated by the evidence rather than by the prior probabilities. Since this class of situations is large, the technique may make practical the application of approximate inference in Bayesian belief networks to many real-world problems. ",Backward Simulation in Bayesian Networks " Testing the validity of probabilistic models containing unmeasured (hidden) variables is shown to be a hard task. 
We show that the task of testing whether models are structurally incompatible with the data at hand, requires an exponential number of independence evaluations, each of the form: ""X is conditionally independent of Y, given Z."" In contrast, a linear number of such evaluations is required to test a standard Bayesian network (one per vertex). On the positive side, we show that if a network with hidden variables G has a tree skeleton, checking whether G represents a given probability model P requires the polynomial number of such independence evaluations. Moreover, we provide an algorithm that efficiently constructs a tree-structured Bayesian network (with hidden variables) that represents P if such a network exists, and further recognizes when such a network does not exist. ","On Testing Whether an Embedded Bayesian Network Represents a Probability Model" " We introduce an approach to high-level conditional planning we call epsilon-safe planning. This probabilistic approach commits us to planning to meet some specified goal with a probability of success of at least 1-epsilon for some user-supplied epsilon. We describe several algorithms for epsilon-safe planning based on conditional planners. The two conditional planners we discuss are Peot and Smith's nonlinear conditional planner, CNLP, and our own linear conditional planner, PLINTH. We present a straightforward extension to conditional planners for which computing the necessary probabilities is simple, employing a commonly-made but perhaps overly-strong independence assumption. We also discuss a second approach to epsilon-safe planning which relaxes this independence assumption, involving the incremental construction of a probability dependence model in conjunction with the construction of the plan graph. ",Epsilon-Safe Planning " We present a method for dynamically generating Bayesian networks from knowledge bases consisting of first-order probability logic sentences. We present a subset of probability logic sufficient for representing the class of Bayesian networks with discrete-valued nodes. We impose constraints on the form of the sentences that guarantee that the knowledge base contains all the probabilistic information necessary to generate a network. We define the concept of d-separation for knowledge bases and prove that a knowledge base with independence conditions defined by d-separation is a complete specification of a probability distribution. We present a network generation algorithm that, given an inference problem in the form of a query Q and a set of evidence E, generates a network to compute P(Q|E). We prove the algorithm to be correct. ",Generating Bayesian Networks from Probability Logic Knowledge Bases " This paper discusses the problem of abstracting conditional probabilistic actions. We identify two distinct types of abstraction: intra-action abstraction and inter-action abstraction. We define what it means for the abstraction of an action to be correct and then derive two methods of intra-action abstraction and two methods of inter-action abstraction which are correct according to this criterion. We illustrate the developed techniques by applying them to actions described with the temporal action representation used in the DRIPS decision-theoretic planner and we describe how the planner uses abstraction to reduce the complexity of planning. ",Abstracting Probabilistic Actions " Heckerman (1993) defined causal independence in terms of a set of temporal conditional independence statements. 
These statements formalized certain types of causal interaction where (1) the effect is independent of the order that causes are introduced and (2) the impact of a single cause on the effect does not depend on what other causes have previously been applied. In this paper, we introduce an equivalent atemporal characterization of causal independence based on a functional representation of the relationship between causes and the effect. In this representation, the interaction between causes and effect can be written as a nested decomposition of functions. Causal independence can be exploited by representing this decomposition in the belief network, resulting in representations that are more efficient for inference than general causal models. We present empirical results showing the benefits of a causal-independence representation for belief-network inference. ",A New Look at Causal Independence " We describe algorithms for learning Bayesian networks from a combination of user knowledge and statistical data. The algorithms have two components: a scoring metric and a search procedure. The scoring metric takes a network structure, statistical data, and a user's prior knowledge, and returns a score proportional to the posterior probability of the network structure given the data. The search procedure generates networks for evaluation by the scoring metric. Our contributions are threefold. First, we identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify the encoding of a user's prior knowledge. In particular, a user can express her knowledge -- for the most part -- as a single prior Bayesian network for the domain. Second, we describe local search and annealing algorithms to be used in conjunction with scoring metrics. In the special case where each node has at most one parent, we show that heuristic search can be replaced with a polynomial algorithm to identify the networks with the highest score. Third, we describe a methodology for evaluating Bayesian-network learning algorithms. We apply this approach to a comparison of metrics and search procedures. ","Learning Bayesian Networks: The Combination of Knowledge and Statistical Data" " Most traditional models of uncertainty have focused on the associational relationship among variables as captured by conditional dependence. In order to successfully manage intelligent systems for decision making, however, we must be able to predict the effects of actions. In this paper, we attempt to unite two branches of research that address such predictions: causal modeling and decision analysis. First, we provide a definition of causal dependence in decision-analytic terms, which we derive from consequences of causal dependence cited in the literature. Using this definition, we show how causal dependence can be represented within an influence diagram. In particular, we identify two inadequacies of an ordinary influence diagram as a representation for cause. We introduce a special class of influence diagrams, called causal influence diagrams, which corrects one of these problems, and identify situations where the other inadequacy can be eliminated. In addition, we describe the relationships between Howard Canonical Form and existing graphical representations of cause. 
",A Decision-Based View of Causality " On the one hand, classical terminological knowledge representation excludes the possibility of handling uncertain concept descriptions involving, e.g., ""usually true"" concept properties, generalized quantifiers, or exceptions. On the other hand, purely numerical approaches for handling uncertainty in general are unable to consider terminological knowledge. This paper presents the language ACP which is a probabilistic extension of terminological logics and aims at closing the gap between the two areas of research. We present the formal semantics underlying the language ALUP and introduce the probabilistic formalism that is based on classes of probabilities and is realized by means of probabilistic constraints. Besides inferring implicitly existent probabilistic relationships, the constraints guarantee terminological and probabilistic consistency. Altogether, the new language ALUP applies to domains where both term descriptions and uncertainty have to be handled. ",Probabilistic Description Logics " Qualitative and infinitesimal probability schemes are consistent with the axioms of probability theory, but avoid the need for precise numerical probabilities. Using qualitative probabilities could substantially reduce the effort for knowledge engineering and improve the robustness of results. We examine experimentally how well infinitesimal probabilities (the kappa-calculus of Goldszmidt and Pearl) perform a diagnostic task - troubleshooting a car that will not start by comparison with a conventional numerical belief network. We found the infinitesimal scheme to be as good as the numerical scheme in identifying the true fault. The performance of the infinitesimal scheme worsens significantly for prior fault probabilities greater than 0.03. These results suggest that infinitesimal probability methods may be of substantial practical value for machine diagnosis with small prior fault probabilities. ","An Experimental Comparison of Numerical and Qualitative Probabilistic Reasoning" " Possibilistic logic, an extension of first-order logic, deals with uncertainty that can be estimated in terms of possibility and necessity measures. Syntactically, this means that a first-order formula is equipped with a possibility degree or a necessity degree that expresses to what extent the formula is possibly or necessarily true. Possibilistic resolution yields a calculus for possibilistic logic which respects the semantics developed for possibilistic logic. A drawback, which possibilistic resolution inherits from classical resolution, is that it may not terminate if applied to formulas belonging to decidable fragments of first-order logic. Therefore we propose an alternative proof method for possibilistic logic. The main feature of this method is that it completely abstracts from a concrete calculus but uses as basic operation a test for classical entailment. We then instantiate possibilistic logic with a terminological logic, which is a decidable subclass o f first-order logic but nevertheless much more expressive than propositional logic. This yields an extension of terminological logics towards the representation of uncertain knowledge which is satisfactory from a semantic as well as algorithmic point of view. 
","An Alternative Proof Method for Possibilistic Logic and its Application to Terminological Logics" " We give an axiomatization of confidence transfer - a known conditioning scheme - from the perspective of expectation-based inference in the sense of Gardenfors and Makinson. Then, we use the notion of belief independence to ""filter out"" different proposal s of possibilistic conditioning rules, all are variations of confidence transfer. Among the three rules that we consider, only Dempster's rule of conditioning passes the test of supporting the notion of belief independence. With the use of this conditioning rule, we then show that we can use local computation for computing desired conditional marginal possibilities of the joint possibility satisfying the given constraints. It turns out that our local computation scheme is already proposed by Shenoy. However, our intuitions are completely different from that of Shenoy. While Shenoy just defines a local computation scheme that fits his framework of valuation-based systems, we derive that local computation scheme from II(,8) = tI(,8 I a) * II(a) and appropriate independence assumptions, just like how the Bayesians derive their local computation scheme. ",Possibilistic Conditioning and Propagation " To coordinate with other agents in its environment, an agent needs models of what the other agents are trying to do. When communication is impossible or expensive, this information must be acquired indirectly via plan recognition. Typical approaches to plan recognition start with a specification of the possible plans the other agents may be following, and develop special techniques for discriminating among the possibilities. Perhaps more desirable would be a uniform procedure for mapping plans to general structures supporting inference based on uncertain and incomplete observations. In this paper, we describe a set of methods for converting plans represented in a flexible procedural language to observation models represented as probabilistic belief networks. ",The Automated Mapping of Plans for Plan Recognition " A logic is defined that allows to express information about statistical probabilities and about degrees of belief in specific propositions. By interpreting the two types of probabilities in one common probability space, the semantics given are well suited to model the influence of statistical information on the formation of subjective beliefs. Cross entropy minimization is a key element in these semantics, the use of which is justified by showing that the resulting logic exhibits some very reasonable properties. ",A Logic for Default Reasoning About Probabilities " The paper deals with optimality issues in connection with updating beliefs in networks. We address two processes: triangulation and construction of junction trees. In the first part, we give a simple algorithm for constructing an optimal junction tree from a triangulated network. In the second part, we argue that any exact method based on local calculations must either be less efficient than the junction tree method, or it has an optimality problem equivalent to that of triangulation. ",Optimal Junction Trees " We present an approach to the solution of decision problems formulated as influence diagrams. This approach involves a special triangulation of the underlying graph, the construction of a junction tree with special properties, and a message passing algorithm operating on the junction tree for computation of expected utilities and optimal decision policies. 
",From Influence Diagrams to Junction Trees " The paper presents a method for reducing the computational complexity of Bayesian networks through identification and removal of weak dependencies (removal of links from the (moralized) independence graph). The removal of a small number of links may reduce the computational complexity dramatically, since several fill-ins and moral links may be rendered superfluous by the removal. The method is described in terms of impact on the independence graph, the junction tree, and the potential functions associated with these. An empirical evaluation of the method using large real-world networks demonstrates the applicability of the method. Further, the method, which has been implemented in Hugin, complements the approximation method suggested by Jensen & Andersen (1990). ","Reduction of Computational Complexity in Bayesian Networks through Removal of Weak Dependencies" " We explore the issue of refining an existent Bayesian network structure using new data which might mention only a subset of the variables. Most previous works have only considered the refinement of the network's conditional probability parameters, and have not addressed the issue of refining the network's structure. We develop a new approach for refining the network's structure. Our approach is based on the Minimal Description Length (MDL) principle, and it employs an adapted version of a Bayesian network learning algorithm developed in our previous work. One of the adaptations required is to modify the previous algorithm to account for the structure of the existent network. The learning algorithm generates a partial network structure which can then be used to improve the existent network. We also present experimental evidence demonstrating the effectiveness of our approach. ",Using New Data to Refine a Bayesian Network " We view the syntax-based approaches to default reasoning as a model-based diagnosis problem, where each source giving a piece of information is considered as a component. It is formalized in the ATMS framework (each source corresponds to an assumption). We assume then that all sources are independent and ""fail"" with a very small probability. This leads to a probability assignment on the set of candidates, or equivalently on the set of consistent environments. This probability assignment induces a Dempster-Shafer belief function which measures the probability that a proposition can be deduced from the evidence. This belief function can be used in several different ways to define a non-monotonic consequence relation. We study and compare these consequence relations. The -case of prioritized knowledge bases is briefly considered. ",Syntax-based Default Reasoning as Probabilistic Model-based Diagnosis " A model to represent spatial information is presented in this paper. It is based on fuzzy constraints represented as fuzzy geometric relations that can be hierarchically structured. The concept of spatial template is introduced to capture the idea of interrelated objects in two-dimensional space. The representation model is used to specify imprecise or vague information consisting in relative locations and orientations of template objects. It is shown in this paper how a template represented by this model can be matched against a crisp situation to recognize a particular instance of this template. Furthermore, the proximity measure (fuzzy measure) between the instance and the template is worked out - this measure can be interpreted as a degree of similarity. 
In this context, template recognition can be viewed as a case of fuzzy pattern recognition. The results of this work have been implemented and applied to a complex military problem from which this work originated. ",Fuzzy Geometric Relations to Represent Hierarchical Spatial Information " This paper examines the problem of constructing belief networks to evaluate plans produced by a knowledge-based planner. Techniques are presented for handling various types of complicating plan features. These include plans with context-dependent consequences, indirect consequences, actions with preconditions that must be true during the execution of an action, contingencies, multiple levels of abstraction, multiple execution agents with partially-ordered and temporally overlapping actions, and plans which reference specific times and time durations. ",Constructing Belief Networks to Evaluate Plans " This paper describes the best-first search strategy used by U-Plan (Mansell 1993a), a planning system that constructs quantitatively ranked plans given an incomplete description of an uncertain environment. U-Plan uses uncertain and incomplete evidence describing the environment, characterizes it using a Dempster-Shafer interval, and generates a set of possible world states. Plan construction takes place in an abstraction hierarchy where strategic decisions are made before tactical decisions. Search through this abstraction hierarchy is guided by a quantitative measure (expected fulfillment) based on decision theory. The search strategy is best-first, with the provision to update expected fulfillment and review previous decisions in the light of planning developments. U-Plan generates multiple plans for multiple possible worlds, and attempts to use existing plans for new world situations. A super-plan is then constructed, based on merging the set of plans and appropriately timed knowledge acquisition operators, which are used to decide between plan alternatives during plan execution. ",Operator Selection While Planning Under Uncertainty " In this paper we describe a framework for model-based diagnosis of dynamic systems, which extends previous work in this field by using and expressing temporal uncertainty in the form of qualitative interval relations a la Allen. Based on a logical framework extended by qualitative and quantitative temporal constraints, we show how to describe behavioral models (both consistency- and abductive-based), discuss how to use abstract observations and show how abstract temporal diagnoses are computed. This yields an expressive framework, which allows the representation of complex temporal behavior, allowing us to represent temporal uncertainty. Due to its abstraction capabilities, computation is made independent of the number of observations and time points in a temporal setting. An example of hepatitis diagnosis is used throughout the paper. ",Model-Based Diagnosis with Qualitative Temporal Uncertainty " Certain classes of problems, including perceptual data understanding, robotics, discovery, and learning, can be represented as incremental, dynamically constructed belief networks. These automatically constructed networks can be dynamically extended and modified as evidence of new individuals becomes available. The main result of this paper is the incremental extension of the singly connected polytree network in such a way that the network retains its singly connected polytree structure after the changes. 
The algorithm is deterministic and is guaranteed to have a complexity of single node addition that is at most of order proportional to the number of nodes (or size) of the network. Additional speed-up can be achieved by maintaining the path information. Despite its incremental and dynamic nature, the algorithm can also be used for probabilistic inference in belief networks in a fashion similar to other exact inference algorithms. ",Incremental Dynamic Construction of Layered Polytree Networks " We present a symbolic machinery that admits both probabilistic and causal information about a given domain and produces probabilistic statements about the effect of actions and the impact of observations. The calculus admits two types of conditioning operators: ordinary Bayes conditioning, P(y|X = x), which represents the observation X = x, and causal conditioning, P(y|do(X = x)), read as the probability of Y = y conditioned on holding X constant (at x) by deliberate action. Given a mixture of such observational and causal sentences, together with the topology of the causal graph, the calculus derives new conditional probabilities of both types, thus enabling one to quantify the effects of actions (and policies) from partially specified knowledge bases, such as Bayesian networks in which some conditional probabilities may not be available. ",A Probabilistic Calculus of Actions " This paper describes a novel approach to planning which takes advantage of decision theory to greatly improve robustness in an uncertain environment. We present an algorithm which computes conditional plans of maximum expected utility. This algorithm relies on a representation of the search space as an AND/OR tree and employs a depth-limit to control computation costs. A numeric robustness factor, which parameterizes the utility function, allows the user to modulate the degree of risk-aversion employed by the planner. Via a look-ahead search, the planning algorithm seeks to find an optimal plan using expected utility as its optimization criterion. We present experimental results obtained by applying our algorithm to a non-deterministic extension of the blocks world domain. Our results demonstrate that the robustness factor governs the degree of risk embodied in the conditional plans computed by our algorithm. ",Robust Planning in Uncertain Environments " This paper examines methods of decision making that are able to accommodate limitations on both the form in which uncertainty pertaining to a decision problem can be realistically represented and the amount of computing time available before a decision must be made. The methods are anytime algorithms in the sense of Boddy and Dean 1991. Techniques are presented for use with Frisch and Haddawy's [1992] anytime deduction system, with an anytime adaptation of Nilsson's [1986] probabilistic logic, and with a probabilistic database model. ",Anytime Decision Making with Imprecise Probabilities " We present several techniques for knowledge engineering of large belief networks (BNs) based on our experiences with a network derived from a large medical knowledge base. The noisy-MAX, a generalization of the noisy-OR gate, is used to model causal independence in a BN with multi-valued variables. We describe the use of leak probabilities to enforce the closed-world assumption in our model. We present Netview, a visualization tool based on causal independence and the use of leak probabilities. 
The Netview software allows knowledge engineers to dynamically view sub-networks for knowledge engineering, and it provides version control for editing a BN. Netview generates sub-networks in which leak probabilities are dynamically updated to reflect the missing portions of the network. ",Knowledge Engineering for Large Belief Networks " While influence diagrams have many advantages as a representation framework for Bayesian decision problems, they have a serious drawback in handling asymmetric decision problems. To be represented in an influence diagram, an asymmetric decision problem must be symmetrized. A considerable amount of unnecessary computation may be involved when a symmetrized influence diagram is evaluated by conventional algorithms. In this paper we present an approach for avoiding such unnecessary computation in influence diagram evaluation. ",Solving Asymmetric Decision Problems with Influence Diagrams " Bayesian Belief Networks (BBNs) are a powerful formalism for reasoning under uncertainty but bear some severe limitations: they require a large amount of information before any reasoning process can start, they have limited contradiction handling capabilities, and their ability to provide explanations for their conclusion is still controversial. There exists a class of reasoning systems, called Truth Maintenance Systems (TMSs), which are able to deal with partially specified knowledge, to provide well-founded explanation for their conclusions, and to detect and handle contradictions. TMSs incorporating measure of uncertainty are called Belief Maintenance Systems (BMSs). This paper describes how a BMS based on probabilistic logic can be applied to BBNs, thus introducing a new class of BBNs, called Ignorant Belief Networks, able to incrementally deal with partially specified conditional dependencies, to provide explanations, and to detect and handle contradictions. ",Belief Maintenance in Bayesian Networks " Independence-based (IB) assignments to Bayesian belief networks were originally proposed as abductive explanations. IB assignments assign fewer variables in abductive explanations than do schemes assigning values to all evidentially supported variables. We use IB assignments to approximate marginal probabilities in Bayesian belief networks. Recent work in belief updating for Bayes networks attempts to approximate posterior probabilities by finding a small number of the highest probability complete (or perhaps evidentially supported) assignments. Under certain assumptions, the probability mass in the union of these assignments is sufficient to obtain a good approximation. Such methods are especially useful for highly-connected networks, where the maximum clique size or the cutset size make the standard algorithms intractable. Since IB assignments contain fewer assigned variables, the probability mass in each assignment is greater than in the respective complete assignment. Thus, fewer IB assignments are sufficient, and a good approximation can be obtained more efficiently. IB assignments can be used for efficiently approximating posterior node probabilities even in cases which do not obey the rather strict skewness assumptions used in previous research. Two algorithms for finding the high probability IB assignments are suggested: one by doing a best-first heuristic search, and another by special-purpose integer linear programming. Experimental results show that this approach is feasible for highly connected belief networks. 
","Belief Updating by Enumerating High-Probability Independence-Based Assignments" " In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loopcutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al (199Oa; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt and others 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully. ",Global Conditioning for Probabilistic Inference in Belief Networks " We construct the belief function that quantifies the agent, beliefs about which event of Q will occurred when he knows that the event is selected by a chance set-up and that the probability function associated to the chance set up is only partially known. ",Belief Induced by the Partial Knowledge of the Probabilities " Over time, there have hen refinements in the way that probability distributions are used for representing beliefs. Models which rely on single probability distributions depict a complete ordering among the propositions of interest, yet human beliefs are sometimes not completely ordered. Non-singleton sets of probability distributions can represent partially ordered beliefs. Convex sets are particularly convenient and expressive, but it is known that there are reasonable patterns of belief whose faithful representation require less restrictive sets. The present paper shows that prior ignorance about three or more exclusive alternatives and the emergence of partially ordered beliefs when evidence is obtained defy representation by any single set of distributions, but yield to a representation baud on several uts. The partial order is shown to be a partial qualitative probability which shares some intuitively appealing attributes with probability distributions. ","Ignorance and the Expressiveness of Single- and Set-Valued Probability Models of Belief" " Model-based diagnosis reasons backwards from a functional schematic of a system to isolate faults given observations of anomalous behavior. We develop a fully probabilistic approach to model based diagnosis and extend it to support hierarchical models. Our scheme translates the functional schematic into a Bayesian network and diagnostic inference takes place in the Bayesian network. A Bayesian network diagnostic inference algorithm is modified to take advantage of the hierarchy to give computational gains. ",A Probabilistic Approach to Hierarchical Model-based Diagnosis " The semigraphoid closure of every couple of CI-statements (GI=conditional independence) is a stochastic CI-model. As a consequence of this result it is shown that every probabilistically sound inference rule for CI-model, having at most two antecedents, is derivable from the semigraphoid inference rules. This justifies the use of semigraphoids as approximations of stochastic CI-models in probabilistic reasoning. The list of all 19 potential dominant elements of the mentioned semigraphoid closure is given as a byproduct. 
","Semigraphoids Are Two-Antecedental Approximations of Stochastic Conditional Independence Models" " System Z+ [Goldszmidt and Pearl, 1991, Goldszmidt, 1992] is a formalism for reasoning with normality defaults of the form ""typically if phi then + (with strength cf)"" where 6 is a positive integer. The system has a critical shortcoming in that it does not sanction inheritance across exceptional subclasses. In this paper we propose an extension to System Z+ that rectifies this shortcoming by extracting additional conditions between worlds from the defaults database. We show that the additional constraints do not change the notion of the consistency of a database. We also make comparisons with competing default reasoning systems. ",Exceptional Subclasses in Qualitative Probability " By analyzing the relationships among chance, weight of evidence and degree of beliefwe show that the assertion ""probability functions are special cases of belief functions"" and the assertion ""Dempster's rule can be used to combine belief functions based on distinct bodies of evidence"" together lead to an inconsistency in Dempster-Shafer theory. To solve this problem, we must reject some fundamental postulates of the theory. We introduce a new approach for uncertainty management that shares many intuitive ideas with D-S theory, while avoiding this problem. ",A Defect in Dempster-Shafer Theory " One important factor determining the computational complexity of evaluating a probabilistic network is the cardinality of the state spaces of the nodes. By varying the granularity of the state spaces, one can trade off accuracy in the result for computational efficiency. We present an anytime procedure for approximate evaluation of probabilistic networks based on this idea. On application to some simple networks, the procedure exhibits a smooth improvement in approximation quality as computation time increases. This suggests that state-space abstraction is one more useful control parameter for designing real-time probabilistic reasoners. ",State-space Abstraction for Anytime Evaluation of Probabilistic Networks " Probability measures by themselves, are known to be inappropriate for modeling the dynamics of plain belief and their excessively strong measurability constraints make them unsuitable for some representational tasks, e.g. in the context of firstorder knowledge. In this paper, we are therefore going to look for possible alternatives and extensions. We begin by delimiting the general area of interest, proposing a minimal list of assumptions to be satisfied by any reasonable quasi-probabilistic valuation concept. Within this framework, we investigate two particularly interesting kinds of quasi-measures which are not or much less affected by the traditional problems. * Ranking measures, which generalize Spohn-type and possibility measures. * Cumulative measures, which combine the probabilistic and the ranking philosophy, allowing thereby a fine-grained account of static and dynamic belief. ",General Belief Measures " We take a general approach to uncertainty on product spaces, and give sufficient conditions for the independence structures of uncertainty measures to satisfy graphoid properties. Since these conditions are arguably more intuitive than some of the graphoid properties, they can be viewed as explanations why probability and certain other formalisms generate graphoids. 
The conditions include a sufficient condition for the Intersection property which can still apply even if there is a strong logical relationship between the variables. We indicate how these results can be used to produce theories of qualitative conditional probability which are semi-graphoids and graphoids. ",Generating Graphoids from Generalised Conditional Probability " This paper studies the connection between probabilistic conditional independence in uncertain reasoning and data dependency in relational databases. As a demonstration of the usefulness of this preliminary investigation, an alternate proof is presented for refuting the conjecture suggested by Pearl and Paz that probabilistic conditional independencies have a complete axiomatization. ",On Axiomatization of Probabilistic Conditional Independencies " In the existing evidential networks with belief functions, the relations among the variables are always represented by joint belief functions on the product space of the involved variables. In this paper, we use conditional belief functions to represent such relations in the network and show some relations between these two kinds of representations. We also present a propagation algorithm for such networks. By analyzing the properties of some special evidential networks with conditional belief functions, we show that the reasoning process can be simplified in such kinds of networks. ",Evidential Reasoning with Conditional Belief Functions " It is well known that conditional independence can be used to factorize a joint probability into a multiplication of conditional probabilities. This paper proposes a constructive definition of inter-causal independence, which can be used to further factorize a conditional probability. An inference algorithm is developed, which makes use of both conditional independence and inter-causal independence to reduce inference complexity in Bayesian networks. ",Inter-causal Independence and Heterogeneous Factorization " We address the problem of causal interpretation of the graphical structure of Bayesian belief networks (BBNs). We review the concept of causality explicated in the domain of structural equations models and show that it is applicable to BBNs. In this view, which we call mechanism-based, causality is defined within models and causal asymmetries arise when mechanisms are placed in the context of a system. We lay out the link between structural equation models and BBN models and formulate the conditions under which the latter can be given causal interpretation. ",Causality in Bayesian Belief Networks " The primary theme of this investigation is a decision-theoretic account of conditional ought statements (e.g., ""You ought to do A, if C"") that rectifies glaring deficiencies in classical deontic logic. The resulting account forms a sound basis for qualitative decision theory, thus providing a framework for qualitative planning under uncertainty. In particular, we show that adding causal relationships (in the form of a single graph) as part of an epistemic state is sufficient to facilitate the analysis of action sequences, their consequences, their interaction with observations, their expected utilities and, hence, the synthesis of plans and strategies under uncertainty. ",From Conditional Oughts to Qualitative Decision Theory " We have developed a general Bayesian algorithm for determining the coordinates of points in a three-dimensional space.
The algorithm takes as input a set of probabilistic constraints on the coordinates of the points, and an a priori distribution for each point location. The output is a maximum-likelihood estimate of the location of each point. We use the extended, iterated Kalman filter, and add a search heuristic for optimizing its solution under nonlinear conditions. This heuristic is based on the same principle as the simulated annealing heuristic for other optimization problems. Our method uses any probabilistic constraints that can be expressed as a function of the point coordinates (for example, distance, angles, dihedral angles, and planarity). It assumes that all constraints have Gaussian noise. In this paper, we describe the algorithm and show its performance on a set of synthetic data to illustrate its convergence properties, and its applicability to domains such as molecular structure determination. ",A Probabilistic Algorithm for Calculating Structure: Borrowing from Simulated Annealing" " The problems associated with scaling involve active and challenging research topics in the area of artificial intelligence. The purpose is to solve real-world problems by means of AI technologies, in cases where the complexity of representation of the real-world problem is potentially combinatorial. In this paper, we present a novel approach to cope with the scaling issues in Bayesian belief networks for ship classification. The proposed approach divides the conceptual model of a complex ship classification problem into a set of small modules that work together to solve the classification problem while preserving the functionality of the original model. The possible ways of explaining sensor returns (e.g., the evidence) for some features, such as portholes along the length of a ship, are sometimes combinatorial. Thus, using an exhaustive approach, which entails the enumeration of all possible explanations, is impractical for larger problems. We present a network structure (referred to as Sequential Decomposition, SD) in which each observation is associated with a set of legitimate outcomes which are consistent with the explanation of each observed piece of evidence. The results show that the SD approach allows one to represent feature-observation relations in a manageable way and achieve the same explanatory power as an exhaustive approach. ",A Study of Scaling Issues in Bayesian Belief Networks for Ship Classification" " This paper addresses the tradeoffs which need to be considered in reasoning using probabilistic network representations, such as Influence Diagrams (IDs). In particular, we examine the tradeoffs entailed in using Temporal Influence Diagrams (TIDs) which adequately capture the temporal evolution of a dynamic system without prohibitive data and computational requirements. Three approaches for TID construction which make different tradeoffs are examined: (1) tailoring the network at each time interval to the data available (rather than just copying the original Bayes Network for all time intervals); (2) modeling the evolution of a parsimonious subset of variables (rather than all variables); and (3) model selection approaches, which seek to minimize some measure of the predictive accuracy of the model without introducing too many parameters, which might cause ""overfitting"" of the model. Methods of evaluating the accuracy/efficiency of the tradeoffs are proposed.
",Tradeoffs in Constructing and Evaluating Temporal Influence Diagrams " Influence diagrams are ideal knowledge representations for Bayesian statistical models. However, these diagrams are difficult for end users to interpret and to manipulate. We present a user-based architecture that enables end users to create and to manipulate the knowledge representation. We use the problem of physicians' interpretation of two-arm parallel randomized clinical trials (TAPRCT) to illustrate the architecture and its use. There are three primary data structures. Elements of statistical models are encoded as subgraphs of a restricted class of influence diagram. The interpretations of those elements are mapped into users' language in a domain-specific, user-based semantic interface, called a patient-flow diagram, in the TAPRCT problem. Pennitted transformations of the statistical model that maintain the semantic relationships of the model are encoded in a metadata-state diagram, called the cohort-state diagram, in the TAPRCT problem. The algorithm that runs the system uses modular actions called construction steps. This framework has been implemented in a system called THOMAS, that allows physicians to interpret the data reported from a TAPRCT. ",End-User Construction of Influence Diagrams for Bayesian Statistics " Dynamic network models (DNMs) are belief networks for temporal reasoning. The DNM methodology combines techniques from time series analysis and probabilistic reasoning to provide (1) a knowledge representation that integrates noncontemporaneous and contemporaneous dependencies and (2) methods for iteratively refining these dependencies in response to the effects of exogenous influences. We use belief-network inference algorithms to perform forecasting, control, and discrete event simulation on DNMs. The belief network formulation allows us to move beyond the traditional assumptions of linearity in the relationships among time-dependent variables and of normality in their probability distributions. We demonstrate the DNM methodology on an important forecasting problem in medicine. We conclude with a discussion of how the methodology addresses several limitations found in traditional time series analyses. ",Forecasting Sleep Apnea with Dynamic Network Models " This paper describes a normative system design that incorporates diagnosis, dynamic evolution, decision making, and information gathering. A single influence diagram demonstrates the design's coherence, yet each activity is more effectively modeled and evaluated separately. Application to offshore oil platforms illustrates the design. For this application, the normative system is embedded in a real-time expert system. ",Normative Engineering Risk Management Systems " We compare the diagnostic accuracy of three diagnostic inference models: the simple Bayes model, the multimembership Bayes model, which is isomorphic to the parallel combination function in the certainty-factor model, and a model that incorporates the noisy OR-gate interaction. The comparison is done on 20 clinicopathological conference (CPC) cases from the American Journal of Medicine-challenging cases describing actual patients often with multiple disorders. We find that the distributions produced by the noisy OR model agree most closely with the gold-standard diagnoses, although substantial differences exist between the distributions and the diagnoses. 
In addition, we find that the multimembership Bayes model tends to significantly overestimate the posterior probabilities of diseases, whereas the simple Bayes model tends to significantly underestimate the posterior probabilities. Our results suggest that additional work to refine the noisy OR model for internal medicine will be worthwhile. ",Diagnosis of Multiple Faults: A Sensitivity Analysis " The inherent intractability of probabilistic inference has hindered the application of belief networks to large domains. Noisy OR-gates [30] and probabilistic similarity networks [18, 17] escape the complexity of inference by restricting model expressiveness. Recent work in the application of belief-network models to time-series analysis and forecasting [9, 10] has given rise to the additive belief network model (ABNM). We (1) discuss the nature and implications of the approximations made by an additive decomposition of a belief network, (2) show greater efficiency in the induction of additive models when available data are scarce, (3) generalize probabilistic inference algorithms to exploit the additive decomposition of ABNMs, (4) show greater efficiency of inference, and (5) compare results on inference with a simple additive belief network. ",Additive Belief-Network Models " Spiegelhalter and Lauritzen [15] studied sequential learning in Bayesian networks and proposed three models for the representation of conditional probabilities. A fourth model, shown here, assumes that the parameter distribution is given by a product of Gaussian functions and updates them from the lambda and pi messages of evidence propagation. We also generalize the noisy OR-gate for multivalued variables, develop the algorithm to compute probability in time proportional to the number of parents (even in networks with loops) and apply the learning model to this gate. ",Parameter Adjustment in Bayes Networks. The generalized noisy OR-gate " Relational models for diagnosis are based on a direct description of the association between disorders and manifestations. This type of model has been especially used and developed by Reggia and his co-workers in the late eighties as a basic starting point for approaching diagnosis problems. The paper proposes a new relational model which includes Reggia's model as a particular case and which allows for a more expressive representation of the observations and of the manifestations associated with disorders. The model distinguishes i) between manifestations which are certainly absent and those which are not (yet) observed, and ii) between manifestations which cannot be caused by a given disorder and manifestations for which we do not know if they can or cannot be caused by this disorder. This new model, which can handle uncertainty in a non-probabilistic way, is based on possibility theory and so-called twofold fuzzy sets, previously introduced by the authors. ",A fuzzy relation-based extension of Reggia's relational model for diagnosis handling uncertain and incomplete information" " From an inconsistent database non-trivial arguments may be constructed both for a proposition, and for the contrary of that proposition. Therefore, inconsistency in a logical database causes uncertainty about which conclusions to accept. This kind of uncertainty is called logical uncertainty. We define a concept of ""acceptability"", which induces a means for differentiating arguments. The more acceptable an argument, the more confident we are in it.
A specific interest is to use the acceptability classes to assign linguistic qualifiers to propositions, such that the qualifier assigned to a proposition reflects its logical uncertainty. A more general interest is to understand how classes of acceptability can be defined for arguments constructed from an inconsistent database, and how this notion of acceptability can be devised to reflect different criteria. Whilst concentrating on the aspects of assigning linguistic qualifiers to propositions, we also indicate the more general significance of the notion of acceptability. ",Dialectic Reasoning with Inconsistent Information " I introduce a temporal belief-network representation of causal independence that a knowledge engineer can use to elicit probabilistic models. Like the current, atemporal belief-network representation of causal independence, the new representation makes knowledge acquisition tractable. Unlike the atemporal representation, however, the temporal representation can simplify inference, and does not require the use of unobservable variables. The representation is less general than is the atemporal representation, but appears to be useful for many practical applications. ",Causal Independence for Knowledge Acquisition and Inference " We take a utility-based approach to categorization. We construct generalizations about events and actions by considering losses associated with failing to distinguish among detailed distinctions in a decision model. The utility-based methods transform detailed states of the world into more abstract categories composed of disjunctions of the states. We show how we can cluster distinctions into groups of distinctions at progressively higher levels of abstraction, and describe rules for decision making with the abstractions. The techniques introduce a utility-based perspective on the nature of concepts, and provide a means of simplifying decision models used in automated reasoning systems. We demonstrate the techniques by describing the capabilities and output of TUBA, a program for utility-based abstraction. ",Utility-Based Abstraction and Categorization " When eliciting probability models from experts, knowledge engineers may compare the results of the model with expert judgment on test scenarios, then adjust model parameters to bring the behavior of the model more in line with the expert's intuition. This paper presents a methodology for analytic computation of sensitivity values to measure the impact of small changes in a network parameter on a target probability value or distribution. These values can be used to guide knowledge elicitation. They can also be used in a gradient descent algorithm to estimate parameter values that maximize a measure of goodness-of-fit to both local and holistic probability assessments. ",Sensitivity Analysis for Probability Assessments in Bayesian Networks " Causal Models are like Dependency Graphs and Belief Nets in that they provide a structure and a set of assumptions from which a joint distribution can, in principle, be computed. Unlike Dependency Graphs, Causal Models are models of hierarchical and/or parallel processes, rather than models of distributions (partially) known to a model builder through some sort of gestalt. As such, Causal Models are more modular, easier to build, more intuitive, and easier to understand than Dependency Graph Models. Causal Models are formally defined and Dependency Graph Models are shown to be a special case of them. Algorithms supporting inference are presented.
Parsimonious methods for eliciting dependent probabilities are presented. ",Causal Modeling " One topic that is likely to attract an increasing amount of attention within the knowledge-based systems research community is the coordination of information provided by multiple experts. We envision a situation in which several experts independently encode information as belief networks. A potential user must then coordinate the conclusions and recommendations of these networks to derive some sort of consensus. One approach to such a consensus is the fusion of the contributed networks into a single, consensus model prior to the consideration of any case-specific data (specific observations, test results). This approach requires two types of combination procedures, one for probabilities, and one for graphs. Since the combination of probabilities is relatively well understood, the key barriers to this approach lie in the realm of graph theory. This paper provides formal definitions of some of the operations necessary to effect the necessary graphical combinations, and provides complexity analyses of these procedures. The paper's key result is that most of these operations are NP-hard, and its primary message is that the derivation of ""good"" consensus networks must be done heuristically. ",Some Complexity Considerations in the Combination of Belief Networks " This paper identifies and solves a new optimization problem: Given a belief network (BN) and a target ordering on its variables, how can we efficiently derive its minimal I-map whose arcs are consistent with the target ordering? We present three solutions to this problem, all of which lead to directed acyclic graphs based on the original BN's recursive basis relative to the specified ordering (such a DAG is sometimes termed the boundary DAG drawn from the given BN relative to the said ordering [5]). Along the way, we also uncover an important general principle about arc reversals: when reordering a BN according to some target ordering (while attempting to minimize the number of arcs generated), the sequence of arc reversals should follow the topological ordering induced by the original belief network's arcs to as great an extent as possible. These results promise to have a significant impact on the derivation of consensus models, as well as on other algorithms that require the reconfiguration and/or combination of BNs. ",Deriving a Minimal I-map of a Belief Network Relative to a Target Ordering of its Nodes" " Probabilistic conceptual network is a knowledge representation scheme designed for reasoning about concepts and categorical abstractions in utility-based categorization. The scheme combines the formalisms of abstraction and inheritance hierarchies from artificial intelligence, and probabilistic networks from decision analysis. It provides a common framework for representing conceptual knowledge, hierarchical knowledge, and uncertainty. It facilitates dynamic construction of categorization decision models at varying levels of abstraction. The scheme is applied to an automated machining problem for reasoning about the state of the machine at varying levels of abstraction in support of actions for maintaining competitiveness of the plant. ",Probabilistic Conceptual Network: A Belief Representation Scheme for Utility-Based Categorization" " We investigate the value of extending the completeness of a decision model along different dimensions of refinement.
Specifically, we analyze the expected value of quantitative, conceptual, and structural refinement of decision models. We illustrate the key dimensions of refinement with examples. The analyses of the value of model refinement can be used to focus the attention of an analyst or an automated reasoning system on extensions of a decision model associated with the greatest expected value. ",Reasoning about the Value of Decision-Model Refinement: Methods and Application" " Problems of probabilistic inference and decision making under uncertainty commonly involve continuous random variables. Often these are discretized to a few points, to simplify assessments and computations. An alternative approximation is to fit analytically tractable continuous probability distributions. This approach has potential simplicity and accuracy advantages, especially if variables can be transformed first. This paper shows how a minimum relative entropy criterion can drive both transformation and fitting, illustrating with a power and logarithm family of transformations and mixtures of Gaussian (normal) distributions, which allow use of efficient influence diagram methods. The fitting procedure in this case is the well-known EM algorithm. The selection of the number of components in a fitted mixture distribution is automated with an objective that trades off accuracy and computational cost. ",Mixtures of Gaussians and Minimum Relative Entropy Techniques for Modeling Continuous Uncertainties" " Valuation networks have been proposed as graphical representations of valuation-based systems (VBSs). The VBS framework is able to capture many uncertainty calculi including probability theory, Dempster-Shafer's belief-function theory, Spohn's epistemic belief theory, and Zadeh's possibility theory. In this paper, we show how valuation networks encode conditional independence relations. For the probabilistic case, the class of probability models encoded by valuation networks includes undirected graph models, directed acyclic graph models, directed balloon graph models, and recursive causal graph models. ",Valuation Networks and Conditional Independence " Relevance-based explanation is a scheme in which partial assignments to Bayesian belief network variables are explanations (abductive conclusions). We allow variables to remain unassigned in explanations as long as they are irrelevant to the explanation, where irrelevance is defined in terms of statistical independence. When multiple-valued variables exist in the system, especially when subsets of values correspond to natural types of events, the overspecification problem, alleviated by independence-based explanation, resurfaces. As a solution to that, as well as for addressing the question of explanation specificity, it is desirable to collapse such a subset of values into a single value on the fly. The equivalent method, which is adopted here, is to generalize the notion of assignments to allow disjunctive assignments. We proceed to define generalized independence-based explanations as maximum posterior probability independence-based generalized assignments (GIB-MAPs). GIB assignments are shown to have certain properties that ease the design of algorithms for computing GIB-MAPs. One such algorithm is discussed here, as well as suggestions for how other algorithms may be adapted to compute GIB-MAPs. GIB-MAP explanations still suffer from instability, a problem which may be addressed using ""approximate"" conditional independence as a condition for irrelevance.
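The EM fitting step mentioned in the mixtures-of-Gaussians abstract above can be illustrated with a minimal one-dimensional, two-component sketch (generic EM only; the transformation step and the accuracy/cost objective from that abstract are not reproduced, and the data here are synthetic):

```python
# Generic EM sketch for a two-component 1-D Gaussian mixture (illustrative only).
import numpy as np

def em_gaussian_mixture(x, k=2, iters=100, seed=0):
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(x)
    w = np.full(k, 1.0 / k)                      # mixture weights
    mu = rng.choice(x, size=k, replace=False)    # initial means
    var = np.full(k, np.var(x))                  # initial variances
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
    print(em_gaussian_mixture(data))
```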
",Relevant Explanations: Allowing Disjunctive Assignments " The Noisy-Or model is convenient for describing a class of uncertain relationships in Bayesian networks [Pearl 1988]. Pearl describes the Noisy-Or model for Boolean variables. Here we generalize the model to nary input and output variables and to arbitrary functions other than the Boolean OR function. This generalization is a useful modeling aid for construction of Bayesian networks. We illustrate with some examples including digital circuit diagnosis and network reliability analysis. ",A Generalization of the Noisy-Or Model " We present a mechanism for constructing graphical models, specifically Bayesian networks, from a knowledge base of general probabilistic information. The unique feature of our approach is that it uses a powerful first-order probabilistic logic for expressing the general knowledge base. This logic allows for the representation of a wide range of logical and probabilistic information. The model construction procedure we propose uses notions from direct inference to identify pieces of local statistical information from the knowledge base that are most appropriate to the particular event we want to reason about. These pieces are composed to generate a joint probability distribution specified as a Bayesian network. Although there are fundamental difficulties in dealing with fully general knowledge, our procedure is practical for quite rich knowledge bases and it supports the construction of a far wider range of networks than allowed for by current template technology. ","Using First-Order Probability Logic for the Construction of Bayesian Networks" " PAGODA (Probabilistic Autonomous Goal-Directed Agent) is a model for autonomous learning in probabilistic domains [desJardins, 1992] that incorporates innovative techniques for using the agent's existing knowledge to guide and constrain the learning process and for representing, reasoning with, and learning probabilistic knowledge. This paper describes the probabilistic representation and inference mechanism used in PAGODA. PAGODA forms theories about the effects of its actions and the world state on the environment over time. These theories are represented as conditional probability distributions. A restriction is imposed on the structure of the theories that allows the inference mechanism to find a unique predicted distribution for any action and world state description. These restricted theories are called uniquely predictive theories. The inference mechanism, Probability Combination using Independence (PCI), uses minimal independence assumptions to combine the probabilities in a theory to make probabilistic predictions. ","Representing and Reasoning With Probabilistic Knowledge: A Bayesian Approach" " One of the most difficult aspects of modeling complex dilemmas in decision-analytic terms is composing a diagram of relevance relations from a set of domain concepts. Decision models in domains such as medicine, however, exhibit certain prototypical patterns that can guide the modeling process. Medical concepts can be classified according to semantic types that have characteristic positions and typical roles in an influence-diagram model. We have developed a graph-grammar production system that uses such inherent interrelationships among medical terms to facilitate the modeling of medical decisions. ",Graph-Grammar Assistance for Automated Generation of Influence Diagrams " In previous work we developed a method of learning Bayesian Network models from raw data. 
This method relies on the well-known minimal description length (MDL) principle. The MDL principle is particularly well suited to this task as it allows us to trade off, in a principled way, the accuracy of the learned network against its practical usefulness. In this paper we present some new results that have arisen from our work. In particular, we present a new local way of computing the description length. This allows us to make significant improvements in our search algorithm. In addition, we modify our algorithm so that it can take into account partial domain information that might be provided by a domain expert. The local computation of description length also opens the door for local refinement of an existing network. The feasibility of our approach is demonstrated by experiments involving networks of a practical size. ",Using Causal Information and Local Measures to Learn Bayesian Networks " As belief networks are used to model increasingly complex situations, the need to automatically construct them from large databases will become paramount. This paper concentrates on solving a part of the belief network induction problem: that of learning the quantitative structure (the conditional probabilities), given the qualitative structure. In particular, a theory is presented that shows how to propagate inference distributions in a belief network, with the only assumption being that the given qualitative structure is correct. Most inference algorithms must make at least this assumption. The theory is based on four network transformations that are sufficient for any inference in a belief network. Furthermore, the claim is made that contrary to popular belief, error will not necessarily grow as the inference chain grows. Instead, for QBN belief nets induced from large enough samples, the error is more likely to decrease as the size of the inference chain increases. ",Minimal Assumption Distribution Propagation in Belief Networks " Previous algorithms for the construction of Bayesian belief network structures from data have been either highly dependent on conditional independence (CI) tests, or have required an ordering on the nodes to be supplied by the user. We present an algorithm that integrates these two approaches - CI tests are used to generate an ordering on the nodes from the database which is then used to recover the underlying Bayesian network structure using a non-CI-based method. Results of preliminary evaluation of the algorithm on two networks (ALARM and LED) are presented. We also discuss some algorithm performance issues and open problems. ",An Algorithm for the Construction of Bayesian Network Structures from Data " This paper addresses learning stochastic rules especially on an inter-attribute relation based on a Minimum Description Length (MDL) principle with a finite number of examples, assuming an application to the design of intelligent relational database systems. The stochastic rule in this paper consists of a model giving the structure like the dependencies of a Bayesian Belief Network (BBN) and some stochastic parameters each indicating a conditional probability of an attribute value given the state determined by the other attributes' values in the same record. In particular, we propose an extended version of the algorithm of Chow and Liu, in which our learning algorithm selects the model from the range where the dependencies among the attributes are represented by a general, plural number of trees. "
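The single-tree Chow-Liu step that the preceding abstract extends can be sketched as follows (a minimal illustration using empirical pairwise mutual information and a maximum-weight spanning tree; the multi-tree extension and the MDL-based model selection described above are not reproduced, and the data are synthetic):

```python
# Minimal Chow-Liu sketch: pairwise mutual information + maximum-weight
# spanning tree over the attributes (illustrative only).
import numpy as np
from itertools import combinations

def mutual_information(xi, xj):
    """Empirical mutual information between two discrete columns."""
    mi = 0.0
    for a in np.unique(xi):
        for b in np.unique(xj):
            p_ab = np.mean((xi == a) & (xj == b))
            if p_ab > 0:
                p_a, p_b = np.mean(xi == a), np.mean(xj == b)
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(data):
    """Return the edges of a maximum-MI spanning tree (Kruskal with union-find)."""
    n_vars = data.shape[1]
    edges = sorted(
        ((mutual_information(data[:, i], data[:, j]), i, j)
         for i, j in combinations(range(n_vars), 2)),
        reverse=True)
    parent = list(range(n_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for mi, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # keep the edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j, mi))
    return tree

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, 500)
    b = np.where(rng.random(500) < 0.1, 1 - a, a)   # noisy copy of a
    c = rng.integers(0, 2, 500)                     # independent attribute
    print(chow_liu_tree(np.column_stack([a, b, c])))
```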
","A Construction of Bayesian Networks from Databases Based on an MDL Principle" " Numerous methods for probabilistic reasoning in large, complex belief or decision networks are currently being developed. There has been little research on automating the dynamic, incremental construction of decision models. A uniform value-driven method of decision model construction is proposed for the hierarchical complete diagnosis. Hierarchical complete diagnostic reasoning is formulated as a stochastic process and modeled using influence diagrams. Given observations, this method creates decision models in order to obtain the best actions sequentially for locating and repairing a fault at minimum cost. This method construct decision models incrementally, interleaving probe actions with model construction and evaluation. The method treats meta-level and baselevel tasks uniformly. That is, the method takes a decision-theoretic look at the control of search in causal pathways and structural hierarchies. ","Knowledge-Based Decision Model Construction for Hierarchical Diagnosis: A Preliminary Report" " We describe the integration of logical and uncertain reasoning methods to identify the likely source and location of software problems. To date, software engineers have had few tools for identifying the sources of error in complex software packages. We describe a method for diagnosing software problems through combining logical and uncertain reasoning analyses. Our preliminary results suggest that such methods can be of value in directing the attention of software engineers to paths of an algorithm that have the highest likelihood of harboring a programming error. ","A Synthesis of Logical and Probabilistic Reasoning for Program Understanding and Debugging" " In recent years the belief network has been used increasingly to model systems in Al that must perform uncertain inference. The development of efficient algorithms for probabilistic inference in belief networks has been a focus of much research in AI. Efficient algorithms for certain classes of belief networks have been developed, but the problem of reporting the uncertainty in inferred probabilities has received little attention. A system should not only be capable of reporting the values of inferred probabilities and/or the favorable choices of a decision; it should report the range of possible error in the inferred probabilities and/or choices. Two methods have been developed and implemented for determining the variance in inferred probabilities in belief networks. These methods, the Approximate Propagation Method and the Monte Carlo Integration Method are discussed and compared in this paper. ","An Implementation of a Method for Computing the Uncertainty in Inferred Probabilities in Belief Networks" " Propositional representation services such as truth maintenance systems offer powerful support for incremental, interleaved, problem-model construction and evaluation. Probabilistic inference systems, in contrast, have lagged behind in supporting this incrementality typically demanded by problem solvers. The problem, we argue, is that the basic task of probabilistic inference is typically formulated at too large a grain-size. We show how a system built around a smaller grain-size inference task can have the desired incrementality and serve as the basis for a low-level (propositional) probabilistic representation service. 
",Incremental Probabilistic Inference " We describe a method for time-critical decision making involving sequential tasks and stochastic processes. The method employs several iterative refinement routines for solving different aspects of the decision making problem. This paper concentrates on the meta-level control problem of deliberation scheduling, allocating computational resources to these routines. We provide different models corresponding to optimization problems that capture the different circumstances and computational strategies for decision making under time constraints. We consider precursor models in which all decision making is performed prior to execution and recurrent models in which decision making is performed in parallel with execution, accounting for the states observed during execution and anticipating future states. We describe algorithms for precursor and recurrent models and provide the results of our empirical investigations to date. ",Deliberation Scheduling for Time-Critical Sequential Decision Making " Intercausal reasoning is a common inference pattern involving probabilistic dependence of causes of an observed common effect. The sign of this dependence is captured by a qualitative property called product synergy. The current definition of product synergy is insufficient for intercausal reasoning where there are additional uninstantiated causes of the common effect. We propose a new definition of product synergy and prove its adequacy for intercausal reasoning with direct and indirect evidence for the common effect. The new definition is based on a new property matrix half positive semi-definiteness, a weakened form of matrix positive semi-definiteness. ",Intercausal Reasoning with Uninstantiated Ancestor Nodes " We examine two types of similarity networks each based on a distinct notion of relevance. For both types of similarity networks we present an efficient inference algorithm that works under the assumption that every event has a nonzero probability of occurrence. Another inference algorithm is developed for type 1 similarity networks that works under no restriction, albeit less efficiently. ",Inference Algorithms for Similarity Networks " Two algorithms are presented for ""compiling"" influence diagrams into a set of simple decision rules. These decision rules define simple-to-execute, complete, consistent, and near-optimal decision procedures. These compilation algorithms can be used to derive decision procedures for human teams solving time constrained decision problems. ",Two Procedures for Compiling Influence Diagrams " Given a belief network with evidence, the task of finding the I most probable explanations (MPE) in the belief network is that of identifying and ordering the I most probable instantiations of the non-evidence nodes of the belief network. Although many approaches have been proposed for solving this problem, most work only for restricted topologies (i.e., singly connected belief networks). In this paper, we will present a new approach for finding I MPEs in an arbitrary belief network. First, we will present an algorithm for finding the MPE in a belief network. Then, we will present a linear time algorithm for finding the next MPE after finding the first MPE. And finally, we will discuss the problem of finding the MPE for a subset of variables of a belief network, and show that the problem can be efficiently solved by this approach. 
",An efficient approach for finding the MPE in belief networks " This paper describes ongoing research into planning in an uncertain environment. In particular, it introduces U-Plan, a planning system that constructs quantitatively ranked plans given an incomplete description of the state of the world. U-Plan uses a DempsterShafer interval to characterise uncertain and incomplete information about the state of the world. The planner takes as input what is known about the world, and constructs a number of possible initial states with representations at different abstraction levels. A plan is constructed for the initial state with the greatest support, and this plan is tested to see if it will work for other possible initial states. All, part, or none of the existing plans may be used in the generation of the plans for the remaining possible worlds. Planning takes place in an abstraction hierarchy where strategic decisions are made before tactical decisions. A super-plan is then constructed, based on merging the set of plans and the appropriately timed acquisition of essential knowledge, which is used to decide between plan alternatives. U-Plan usually produces a super-plan in less time than a classical planner would take to produce a set of plans, one for each possible world. ",A Method for Planning Given Uncertain and Incomplete Information " This paper discusses how conflicts (as used by the consistency-based diagnosis community) can be adapted to be used in a search-based algorithm for computing prior and posterior probabilities in discrete Bayesian Networks. This is an ""anytime"" algorithm, that at any stage can estimate the probabilities and give an error bound. Whereas the most popular Bayesian net algorithms exploit the structure of the network for efficiency, we exploit probability distributions for efficiency; this algorithm is most suited to the case with extreme probabilities. This paper presents a solution to the inefficiencies found in naive algorithms, and shows how the tools of the consistency-based diagnosis community (namely conflicts) can be used effectively to improve the efficiency. Empirical results with networks having tens of thousands of nodes are presented. ",The use of conflicts in searching Bayesian networks " Bayesian belief networks can be used to represent and to reason about complex systems with uncertain, incomplete and conflicting information. Belief networks are graphs encoding and quantifying probabilistic dependence and conditional independence among variables. One type of reasoning of interest in diagnosis is called abductive inference (determination of the global most probable system description given the values of any partial subset of variables). In some cases, abductive inference can be performed with exact algorithms using distributed network computations but it is an NP-hard problem and complexity increases drastically with the presence of undirected cycles, number of discrete states per variable, and number of variables in the network. This paper describes an approximate method based on genetic algorithms to perform abductive inference in large, multiply connected networks for which complexity is a concern when using most exact methods and for which systematic search methods are not feasible. The theoretical adequacy of the method is discussed and preliminary experimental results are presented. 
","GALGO: A Genetic ALGOrithm Decision Support Tool for Complex Uncertain Systems Modeled with Bayesian Belief Networks" " Tree structures have been shown to provide an efficient framework for propagating beliefs [Pearl,1986]. This paper studies the problem of finding an optimal approximating tree. The star decomposition scheme for sets of three binary variables [Lazarsfeld,1966; Pearl,1986] is shown to enhance the class of probability distributions that can support tree structures; such structures are called tree-decomposable structures. The logarithm scoring rule is found to be an appropriate optimality criterion to evaluate different tree-decomposable structures. Characteristics of such structures closest to the actual belief network are identified using the logarithm rule, and greedy and exact techniques are developed to find the optimal approximation. ",Using Tree-Decomposable Structures to Approximate Belief Networks " The potential influence diagram is a generalization of the standard ""conditional"" influence diagram, a directed network representation for probabilistic inference and decision analysis [Ndilikilikesha, 1991]. It allows efficient inference calculations corresponding exactly to those on undirected graphs. In this paper, we explore the relationship between potential and conditional influence diagrams and provide insight into the properties of the potential influence diagram. In particular, we show how to convert a potential influence diagram into a conditional influence diagram, and how to view the potential influence diagram operations in terms of the conditional influence diagram. ","Using Potential Influence Diagrams for Probabilistic Inference and Decision Making" " In order to find a causal explanation for data presented in the form of covariance and concentration matrices it is necessary to decide if the graph formed by such associations is a projection of a directed acyclic graph (dag). We show that the general problem of deciding whether such a dag exists is NP-complete. ",Deciding Morality of Graphs is NP-complete " To determine the value of perfect information in an influence diagram, one needs first to modify the diagram to reflect the change in information availability, and then to compute the optimal expected values of both the original diagram and the modified diagram. The value of perfect information is the difference between the two optimal expected values. This paper is about how to speed up the computation of the optimal expected value of the modified diagram by making use of the intermediate computation results obtained when computing the optimal expected value of the original diagram. ","Incremental computation of the value of perfect information in stepwise-decomposable influence diagrams" " This paper presents and discusses several methods for reasoning from inconsistent knowledge bases. A so-called argumentative-consequence relation taking into account the existence of consistent arguments in favor of a conclusion and the absence of consistent arguments in favor of its contrary, is particularly investigated. Flat knowledge bases, i.e. without any priority between their elements, as well as prioritized ones where some elements are considered as more strongly entrenched than others are studied under different consequence relations. Lastly a paraconsistent-like treatment of prioritized knowledge bases is proposed, where both the level of entrenchment and the level of paraconsistency attached to a formula are propagated. 
The priority levels are handled in the framework of possibility theory. ",Argumentative inference in uncertain and inconsistent knowledge bases " A major reason behind the success of probability calculus is that it possesses a number of valuable tools, which are based on the notion of probabilistic independence. In this paper, I identify a notion of logical independence that makes some of these tools available to a class of propositional databases, called argument databases. Specifically, I suggest a graphical representation of argument databases, called argument networks, which resemble Bayesian networks. I also suggest an algorithm for reasoning with argument networks, which resembles a basic algorithm for reasoning with Bayesian networks. Finally, I show that argument networks have several applications: Nonmonotonic reasoning, truth maintenance, and diagnosis. ",Argument Calculus and Networks " Argumentation is the process of constructing arguments about propositions, and the assignment of statements of confidence to those propositions based on the nature and relative strength of their supporting arguments. The process is modelled as a labelled deductive system, in which propositions are doubly labelled with the grounds on which they are based and a representation of the confidence attached to the argument. Argument construction is captured by a generalized argument consequence relation based on the conjunction-implication fragment of minimal logic. Arguments can be aggregated by a variety of numeric and symbolic flattening functions. This approach appears to shed light on the common logical structure of a variety of quantitative, qualitative and defeasible uncertainty calculi. ",Argumentation as a General Framework for Uncertain Reasoning " In this paper some initial work towards a new approach to qualitative reasoning under uncertainty is presented. This method is not only applicable to qualitative probabilistic reasoning, as is the case with other methods, but also allows the qualitative propagation within networks of values based upon possibility theory and Dempster-Shafer evidence theory. The method is applied to two simple networks from which a large class of directed graphs may be constructed. The results of this analysis are used to compare the qualitative behaviour of the three major quantitative uncertainty handling formalisms, and to demonstrate that the qualitative integration of the formalisms is possible under certain assumptions. ",On reasoning in networks with qualitative uncertainty " This paper introduces a qualitative measure of ambiguity and analyses its relationship with other measures of uncertainty. Probability measures relative likelihoods, while ambiguity measures vagueness surrounding those judgments. Ambiguity is an important representation of uncertain knowledge. It deals with a different type of uncertainty from that modeled by subjective probability or belief. ",Qualitative Measures of Ambiguity " Shafer's theory of belief and the Bayesian theory of probability are two alternative and mutually inconsistent approaches toward modelling uncertainty in artificial intelligence. To help reduce the conflict between these two approaches, this paper reexamines expected utility theory, from which Bayesian probability theory is derived. Expected utility theory requires the decision maker to assign a utility to each decision conditioned on every possible event that might occur.
But frequently the decision maker cannot foresee all the events that might occur, i.e., one of the possible events is the occurrence of an unforeseen event. So once we acknowledge the existence of unforeseen events, we need to develop some way of assigning utilities to decisions conditioned on unforeseen events. The commonsensical solution to this problem is to assign similar utilities to events which are similar. Implementing this commonsensical solution is equivalent to replacing Bayesian subjective probabilities over the space of foreseen and unforeseen events by random set theory probabilities over the space of foreseen events. This leads to an expected utility principle in which normalized variants of Shafer's commonalities play the role of subjective probabilities. Hence allowing for unforeseen events in decision analysis causes Bayesian probability theory to become much more similar to Shaferian theory. ","A Bayesian Variant of Shafer's Commonalities For Modelling Unforeseen Events" " We present a semantics for adding uncertainty to conditional logics for default reasoning and belief revision. We are able to treat conditional sentences as statements of conditional probability, and express rules for revision such as ""If A were believed, then B would be believed to degree p."" This method of revision extends conditionalization by allowing meaningful revision by sentences whose probability is zero. This is achieved through the use of counterfactual probabilities. Thus, our system accounts for the best properties of qualitative methods of update (in particular, the AGM theory of revision) and probabilistic methods. We also show how our system can be viewed as a unification of probability theory and possibility theory, highlighting their orthogonality and providing a means for expressing the probability of a possibility. We also demonstrate the connection to Lewis's method of imaging. ",The Probability of a Possibility: Adding Uncertainty to Default Rules " A key issue in the handling of temporal data is the treatment of persistence; in most approaches it consists in inferring defeasible conclusions by extrapolating from the actual knowledge of the history of the world; we propose here a gradual modelling of persistence, following the idea that persistence is decreasing (the further we are from the last time point where a fluent is known to be true, the less certainly true the fluent is); it is based on possibility theory, which has strong relations with other well-known ordering-based approaches to nonmonotonic reasoning. We compare our approach with Dean and Kanazawa's probabilistic projection. We give a formal modelling of the decreasing persistence problem. Lastly, we show how to infer nonmonotonic conclusions using the principle of decreasing persistence. ",Possibilistic decreasing persistence " Evidential reasoning is now a leading topic in Artificial Intelligence. Evidence is represented by a variety of evidential functions. Evidential reasoning is carried out by certain kinds of fundamental operation on these functions. This paper discusses two of the basic operations on evidential functions, the discount operation and the well-known orthogonal sum operation. We show that the discount operation is not commutative with the orthogonal sum operation, and derive expressions for the two operations applied to the various evidential functions. 
",Discounting and Combination Operations in Evidential Reasoning " The classical propositional assumption-based model is extended to incorporate probabilities for the assumptions. Then it is placed into the framework of evidence theory. Several authors like Laskey, Lehner (1989) and Provan (1990) already proposed a similar point of view, but the first paper is not as much concerned with mathematical foundations, and Provan's paper develops into a different direction. Here we thoroughly develop and present the mathematical foundations of this theory, together with computational methods adapted from Reiter, De Kleer (1987) and Inoue (1992). Finally, recently proposed techniques for computing degrees of support are presented. ",Probabilistic Assumption-Based Reasoning " This paper presents a procedure to determine a complete belief function from the known values of belief for some of the subsets of the frame of discerment. The method is based on the principle of minimum commitment and a new principle called the focusing principle. This additional principle is based on the idea that belief is specified for the most relevant sets: the focal elements. The resulting procedure is compared with existing methods of building complete belief functions: the minimum specificity principle and the least commitment principle. ",Partially Specified Belief Functions " Jeffrey's rule of conditioning has been proposed in order to revise a probability measure by another probability function. We generalize it within the framework of the models based on belief functions. We show that several forms of Jeffrey's conditionings can be defined that correspond to the geometrical rule of conditioning and to Dempster's rule of conditioning, respectively. ",Jeffrey's rule of conditioning generalized to belief functions " In this paper, the concept of possibilistic evidence which is a possibility distribution as well as a body of evidence is proposed over an infinite universe of discourse. The inference with possibilistic evidence is investigated based on a unified inference framework maintaining both the compatibility of concepts and the consistency of the probability logic. ",Inference with Possibilistic Evidence " An elaboration of Dempster's method of constructing belief functions suggests a broadly applicable strategy for constructing lower probabilities under a variety of evidentiary constraints. ",Constructing Lower Probabilities " In a probability-based reasoning system, Bayes' theorem and its variations are often used to revise the system's beliefs. However, if the explicit conditions and the implicit conditions of probability assignments `me properly distinguished, it follows that Bayes' theorem is not a generally applicable revision rule. Upon properly distinguishing belief revision from belief updating, we see that Jeffrey's rule and its variations are not revision rules, either. Without these distinctions, the limitation of the Bayesian approach is often ignored or underestimated. Revision, in its general form, cannot be done in the Bayesian approach, because a probability distribution function alone does not contain the information needed by the operation. ",Belief Revision in Probability Theory " This paper examines the concept of a combination rule for belief functions. It is shown that two fairly simple and apparently reasonable assumptions determine Dempster's rule, giving a new justification for it. 
",The Assumptions Behind Dempster's Rule " In this paper, we present a decision support system based on belief functions and the pignistic transformation. The system is an integration of an evidential system for belief function propagation and a valuation-based system for Bayesian decision analysis. The two subsystems are connected through the pignistic transformation. The system takes as inputs the user's ""gut feelings"" about a situation and suggests what, if any, are to be tested and in what order, and it does so with a user friendly interface. ",A Belief-Function Based Decision Support System " This paper provides an overview of the SP theory of intelligence and its central idea that artificial intelligence, mainstream computing, and much of human perception and cognition, may be understood as information compression. The background and origins of the SP theory are described, and the main elements of the theory, including the key concept of multiple alignment, borrowed from bioinformatics but with important differences. Associated with the SP theory is the idea that redundancy in information may be understood as repetition of patterns, that compression of information may be achieved via the matching and unification (merging) of patterns, and that computing and information compression are both fundamentally probabilistic. It appears that the SP system is Turing-equivalent in the sense that anything that may be computed with a Turing machine may, in principle, also be computed with an SP machine. One of the main strengths of the SP theory and the multiple alignment concept is in modelling concepts and phenomena in artificial intelligence. Within that area, the SP theory provides a simple but versatile means of representing different kinds of knowledge, it can model both the parsing and production of natural language, with potential for the understanding and translation of natural languages, it has strengths in pattern recognition, with potential in computer vision, it can model several kinds of reasoning, and it has capabilities in planning, problem solving, and unsupervised learning. The paper includes two examples showing how alternative parsings of an ambiguous sentence may be modelled as multiple alignments, and another example showing how the concept of multiple alignment may be applied in medical diagnosis. ",Computing as compression: the SP theory of intelligence " This paper presents capabilities of using genetic algorithms to find approximations of function extrema, which cannot be found using analytic ways. To enhance effectiveness of calculations, algorithm has been parallelized using OpenMP library. We gained much increase in speed on platforms using multithreaded processors with shared memory free access. During analysis we used different modifications of genetic operator, using them we obtained varied evolution process of potential solutions. Results allow to choose best methods among many applied in genetic algorithms and observation of acceleration on Yorkfield, Bloomfield, Westmere-EX and most recent Sandy Bridge cores. ","Generating extrema approximation of analytically incomputable functions through usage of parallel computer aided genetic algorithms" " Several algorithms have been proposed for discovering patterns from trajectories of moving objects, but only a few have concentrated on outlier detection. Existing approaches, in general, discover spatial outliers, and do not provide any further analysis of the patterns. 
In this paper we introduce semantic spatial and spatio-temporal outliers and propose a new algorithm for trajectory outlier detection. Semantic outliers are computed between regions of interest, where objects have similar movement intention, and there exist standard paths which connect the regions. We show with experiments on real data that the method finds semantic outliers from trajectory data that are not discovered by similar approaches. ","Discovering Semantic Spatial and Spatio-Temporal Outliers from Moving Object Trajectories" " Hepatitis C virus (HCV) is a widespread disease all over the world. HCV has a very high mutation rate that makes it resistant to antibodies. Modeling HCV to identify the virus mutation process is essential to its detection and the prediction of its evolution. This paper presents a model-based framework for estimating the mutation rate of HCV in two steps. Firstly, a profile hidden Markov model (PHMM) architecture was built to select the sequences that represent each year. Secondly, the mutation rate was calculated using the pair-wise distance method between sequences. A pilot study is conducted on the NS5B zone of an HCV dataset of genotype 4 subtype a (HCV4a) in Egypt. ","Model Based Framework for Estimating Mutation Rate of Hepatitis C Virus in Egypt" " In this paper we describe a novel method for evidential reasoning [1]. It involves modelling the process of evidential reasoning in three steps, namely, evidence structure construction, evidence accumulation, and decision making. The proposed method, called RES, is novel in that evidence strength is associated with an evidential support relationship (an argument) between a pair of statements and such strength is carried by comparison between arguments. This is in contrast to the conventional approaches, where evidence strength is represented numerically and is associated with a statement. ",RES - a Relative Method for Evidential Reasoning " An algorithm for generating the structure of a directed acyclic graph from data using the notion of causal input lists is presented. The algorithm manipulates the ordering of the variables with operations which very much resemble arc reversal. Operations are only applied if the DAG after the operation represents at least the independencies represented by the DAG before the operation until no more arcs can be removed from the DAG. The resulting DAG is a minimal I-map. ",Optimizing Causal Orderings for Generating DAGs from Data " Possibilistic logic has been proposed as a numerical formalism for reasoning with uncertainty. There has been interest in developing qualitative accounts of possibility, as well as an explanation of the relationship between possibility and modal logics. We present two modal logics that can be used to represent and reason with qualitative statements of possibility and necessity. Within this modal framework, we are able to identify interesting relationships between possibilistic logic, beliefs and conditionals. In particular, the most natural conditional definable via possibilistic means for default reasoning is identical to Pearl's conditional for ε-semantics. ",Modal Logics for Qualitative Possibility and Beliefs " An influence diagram is a graphical representation of belief networks with uncertainty. This article studies the structural properties of a probabilistic model in an influence diagram. In particular, structural controllability theorems and structural observability theorems are developed and algorithms are formulated. 
Controllability and observability are fundamental concepts in dynamic systems (Luenberger 1979). Controllability corresponds to the ability to control a system while observability analyzes the inferability of its variables. Both properties can be determined by the ranks of the system matrices. Structural controllability and observability, on the other hand, analyze the property of a system with its structure only, without the specific knowledge of the values of its elements (Lin 1974, Shields and Pearson 1976). The structural analysis explores the connection between the structure of a model and the functional dependence among its elements. It is useful in comprehending problems and formulating solutions by challenging the underlying intuitions and detecting inconsistency in a model. This type of qualitative reasoning can sometimes provide insight even when there is insufficient numerical information in a model. ",Structural Controllability and Observability in Influence Diagrams " Experts do not always feel very comfortable when they have to give precise numerical estimations of certainty degrees. In this paper we present a qualitative approach which allows for attaching partially ordered symbolic grades to logical formulas. Uncertain information is expressed by means of parameterized modal operators. We propose a semantics for this multimodal logic and give a sound and complete axiomatization. We study the links with related approaches and suggest how this framework might be used to manage both uncertain and incomplete knowledge. ",Lattice-Based Graded Logic: a Multimodal Approach " We have developed a probabilistic forecasting methodology through a synthesis of belief network models and classical time-series analysis. We present the dynamic network model (DNM) and describe methods for constructing, refining, and performing inference with this representation of temporal probabilistic knowledge. The DNM representation extends static belief-network models to more general dynamic forecasting models by integrating and iteratively refining contemporaneous and time-lagged dependencies. We discuss key concepts in terms of a model for forecasting U.S. car sales in Japan. ",Dynamic Network Models for Forecasting " We describe how we selectively reformulate portions of a belief network that pose difficulties for solution with a stochastic-simulation algorithm. We employ the selective conditioning approach to target specific nodes in a belief network for decomposition, based on the contribution the nodes make to the tractability of stochastic simulation. We review previous work on BNRAS algorithms, randomized approximation algorithms for probabilistic inference. We show how selective conditioning can be employed to reformulate a single BNRAS problem into multiple tractable BNRAS simulation problems. We discuss how we can use another simulation algorithm, logic sampling, to solve a component of the inference problem that provides a means for knitting the solutions of individual subproblems into a final result. Finally, we analyze tradeoffs among the computational subtasks associated with the selective conditioning approach to reformulation. ",Reformulating Inference Problems Through Selective Conditioning " The product expansion of conditional probabilities for belief nets is not maximum entropy. This appears to deny a desirable kind of assurance for the model. However, a kind of guarantee that is almost as strong as maximum entropy can be derived. 
Surprisingly, a variant model also exhibits the guarantee, and for many cases obtains a higher performance score than the product expansion. ",Entropy and Belief Networks " We report on an experimental investigation into opportunities for parallelism in belief-net inference. Specifically, we report on a study performed of the available parallelism, on hypercube-style machines, of a set of randomly generated belief nets, using factoring (SPI) style inference algorithms. Our results indicate that substantial speedup is available, but that it is available only through parallelization of individual conformal product operations, and depends critically on finding an appropriate factoring. We find negligible opportunity for parallelism at the topological, or clustering tree, level. ",Parallelizing Probabilistic Inference: Some Early Explorations " This paper introduces the notion of objection-based causal networks which resemble probabilistic causal networks except that they are quantified using objections. An objection is a logical sentence and denotes a condition under which a causal dependency does not exist. Objection-based causal networks enjoy almost all the properties that make probabilistic causal networks popular, with the added advantage that objections are arguably more intuitive than probabilities. ",Objection-Based Causal Networks " This paper investigates the possibility of performing automated reasoning in probabilistic logic when probabilities are expressed by means of linguistic quantifiers. Each linguistic term is expressed as a prescribed interval of proportions. Then instead of propagating numbers, qualitative terms are propagated in accordance with the numerical interpretation of these terms. The quantified syllogism, modelling the chaining of probabilistic rules, is studied in this context. It is shown that a qualitative counterpart of this syllogism makes sense, and is relatively independent of the threshold defining the linguistically meaningful intervals, provided that these threshold values remain in accordance with the intuition. The inference power is less than that of a full-fledged probabilistic constraint propagation device but better corresponds to what could be thought of as commonsense probabilistic reasoning. ",A Symbolic Approach to Reasoning with Linguistic Quantifiers " Data fusion allows the elaboration and the evaluation of a situation synthesized from low-level information provided by different kinds of sensors. The fusion of the collected data results in fewer, higher-level pieces of information that are more easily assessed by a human operator and that assist him effectively in his decision process. In this paper we present the suitability and the advantages of using a Possibilistic Assumption based Truth Maintenance System (Π-ATMS) in a data fusion military application. We first describe the problem, the needed knowledge representation formalisms and problem solving paradigms. Then we remind the reader of the basic concepts of ATMSs, Possibilistic Logic and Π-ATMSs. Finally we detail the solution to the given data fusion problem and conclude with the results and comparison with a non-possibilistic solution. ","Possibilistic Assumption based Truth Maintenance System, Validation in a Data Fusion Application" " In the probabilistic approach to uncertainty management the input knowledge is usually represented by means of some probability distributions. 
In this paper we assume that the input knowledge is given by two discrete conditional probability distributions, represented by two stochastic matrices P and Q. The consistency of the knowledge base is analyzed. Coherence conditions and explicit formulas for the extension to marginal distributions are obtained in some special cases. ",Knowledge Integration for Conditional Probability Assessments " To date, most probabilistic reasoning systems have relied on a fixed belief network constructed at design time. The network is used by an application program as a representation of (in)dependencies in the domain. Probabilistic inference algorithms operate over the network to answer queries. Recognizing the inflexibility of fixed models has led researchers to develop automated network construction procedures that use an expressive knowledge base to generate a network that can answer a query. Although more flexible than fixed model approaches, these construction procedures separate construction and evaluation into distinct phases. In this paper we develop an approach to combining incremental construction and evaluation of a partial probability model. The combined method holds promise for improved methods for control of model construction based on a trade-off between fidelity of results and cost of construction. ",Integrating Model Construction and Evaluation " We recently described a formalism for reasoning with if-then rules that are expressed with different levels of firmness [18]. The formalism interprets these rules as extreme conditional probability statements, specifying orders of magnitude of disbelief, which impose constraints over possible rankings of worlds. It was shown that, once we compute a priority function Z+ on the rules, the degree to which a given query is confirmed or denied can be computed in O(log n) propositional satisfiability tests, where n is the number of rules in the knowledge base. In this paper, we show that computing Z+ requires O(n^2 log n) satisfiability tests, not an exponential number as was conjectured in [18], which reduces to polynomial complexity in the case of Horn expressions. We also show how reasoning with imprecise observations can be incorporated in our formalism and how the popular notions of belief revision and epistemic entrenchment are embodied naturally and tractably. ",Reasoning With Qualitative Probabilities Can Be Tractable " A computational scheme for reasoning about dynamic systems using (causal) probabilistic networks is presented. The scheme is based on the framework of Lauritzen and Spiegelhalter (1988), and may be viewed as a generalization of the inference methods of classical time-series analysis in the sense that it allows description of non-linear, multivariate dynamic systems with complex conditional independence structures. Further, the scheme provides a method for efficient backward smoothing and possibilities for efficient, approximate forecasting methods. The scheme has been implemented on top of the HUGIN shell. ",A computational scheme for Reasoning in Dynamic Probabilistic Networks " The fundamental updating process in the transferable belief model is related to the concept of specialization and can be described by a specialization matrix. The degree of belief in the truth of a proposition is a degree of justified support. The Principle of Minimal Commitment implies that one should never give more support to the truth of a proposition than justified. 
We show that Dempster's rule of conditioning corresponds essentially to the least committed specialization, and that Dempster's rule of combination results essentially from commutativity requirements. The concept of generalization, dual to the concept of specialization, is described. ","The Dynamic of Belief in the Transferable Belief Model and Specialization-Generalization Matrices" " A new entropy-like measure as well as a new measure of total uncertainty pertaining to the Dempster-Shafer theory are introduced. It is argued that these measures are better justified than any of the previously proposed candidates. ",A Note on the Measure of Discord " A number of writers (Joseph Halpern and Fahiem Bacchus among them) have offered semantics for formal languages in which inferences concerning probabilities can be made. Our concern is different. This paper provides a formalization of nonmonotonic inferences in which the conclusion is supported only to a certain degree. Such inferences are clearly 'invalid' since they must allow the falsity of a conclusion even when the premises are true. Nevertheless, such inferences can be characterized both syntactically and semantically. The 'premises' of probabilistic arguments are sets of statements (as in a database or knowledge base), and the conclusions are categorical statements in the language. We provide standards for both this form of inference, for which high probability is required, and for an inference in which the conclusion is qualified by an intermediate interval of support. ",Semantics for Probabilistic Inference " We discuss problems for convex Bayesian decision making and uncertainty representation. These include the inability to accommodate various natural and useful constraints and the possibility of an analog of the classical Dutch Book being made against an agent behaving in accordance with convex Bayesian prescriptions. A more general set-based Bayesianism may be as tractable and would avoid the difficulties we raise. ",Some Problems for Convex Bayesians " This paper presents a Bayesian framework for assessing the adequacy of a model without the necessity of explicitly enumerating a specific alternate model. A test statistic is developed for tracking the performance of the model across repeated problem instances. Asymptotic methods are used to derive an approximate distribution for the test statistic. When the model is rejected, the individual components of the test statistic can be used to guide the search for an alternate model. ","Bayesian Meta-Reasoning: Determining Model Adequacy from Within a Small World" " The ideal Bayesian agent reasons from a global probability model, but real agents are restricted to simplified models which they know to be adequate only in restricted circumstances. Very little formal theory has been developed to help fallibly rational agents manage the process of constructing and revising small world models. The goal of this paper is to present a theoretical framework for analyzing model management approaches. For a probability forecasting problem, a search process over small world models is analyzed as an approximation to a larger-world model which the agent cannot explicitly enumerate or compute. Conditions are given under which the sequence of small-world models converges to the larger-world probabilities. ",The Bounded Bayesian " Automated decision making is often complicated by the complexity of the knowledge involved. Much of this complexity arises from the context-sensitive variations of the underlying phenomena. 
We propose a framework for representing descriptive, context-sensitive knowledge. Our approach attempts to integrate categorical and uncertain knowledge in a network formalism. This paper outlines the basic representation constructs, examines their expressiveness and efficiency, and discusses the potential applications of the framework. ","Representing Context-Sensitive Knowledge in a Network Formalism: A Preliminary Report" " Bayesian networks are directed acyclic graphs representing independence relationships among a set of random variables. A random variable can be regarded as a set of exhaustive and mutually exclusive propositions. We argue that there are several drawbacks resulting from the propositional nature and acyclic structure of Bayesian networks. To remedy these shortcomings, we propose a probabilistic network where nodes represent unary predicates and which may contain directed cycles. The proposed representation allows us to represent domain knowledge in a single static network even though we cannot determine the instantiations of the predicates before hand. The ability to deal with cycles also enables us to handle cyclic causal tendencies and to recognize recursive plans. ",A Probabilistic Network of Predicates " The Dempster-Shafer theory of evidence has been used intensively to deal with uncertainty in knowledge-based systems. However the representation of uncertain relationships between evidence and hypothesis groups (heuristic knowledge) is still a major research problem. This paper presents an approach to representing such heuristic knowledge by evidential mappings which are defined on the basis of mass functions. The relationships between evidential mappings and multi valued mappings, as well as between evidential mappings and Bayesian multi- valued causal link models in Bayesian theory are discussed. Following this the detailed procedures for constructing evidential mappings for any set of heuristic rules are introduced. Several situations of belief propagation are discussed. ",Representing Heuristic Knowledge in D-S Theory " Bayes nets are relatively recent innovations. As a result, most of their theoretical development has focused on the simplest class of single-author models. The introduction of more sophisticated multiple-author settings raises a variety of interesting questions. One such question involves the nature of compromise and consensus. Posterior compromises let each model process all data to arrive at an independent response, and then split the difference. Prior compromises, on the other hand, force compromise to be reached on all points before data is observed. This paper introduces prior compromises in a Bayes net setting. It outlines the problem and develops an efficient algorithm for fusing two directed acyclic graphs into a single, consensus structure, which may then be used as the basis of a prior compromise. ",The Topological Fusion of Bayes Nets " In Moral, Campos (1991) and Cano, Moral, Verdegay-Lopez (1991) a new method of conditioning convex sets of probabilities has been proposed. The result of it is a convex set of non-necessarily normalized probability distributions. The normalizing factor of each probability distribution is interpreted as the possibility assigned to it by the conditioning information. From this, it is deduced that the natural value for the conditional probability of an event is a possibility distribution. 
The aim of this paper is to study methods of transforming this possibility distribution into a probability (or uncertainty) interval. These methods will be based on the use of Sugeno and Choquet integrals. Their behaviour will be compared on the basis of some selected examples. ","Calculating Uncertainty Intervals From Conditional Convex Sets of Probabilities" " The trajectory of a robot is monitored in a restricted dynamic environment using light beam sensor data. We have a Dynamic Belief Network (DBN), based on a discrete model of the domain, which provides discrete monitoring analogous to conventional quantitative filter techniques. Sensor observations are added to the basic DBN in the form of specific evidence. However, sensor data is often partially or totally incorrect. We show how the basic DBN, which infers only an impossible combination of evidence, may be modified to handle specific types of incorrect data which may occur in the domain. We then present an extension to the DBN, the addition of an invalidating node, which models the status of the sensor as working or defective. This node provides a qualitative explanation of inconsistent data: it is caused by a defective sensor. The connection of successive instances of the invalidating node models the status of a sensor over time, allowing the DBN to handle both persistent and intermittent faults. ",Sensor Validation Using Dynamic Belief Networks " The paper describes aHUGIN, a tool for creating adaptive systems. aHUGIN is an extension of the HUGIN shell, and is based on the methods reported by Spiegelhalter and Lauritzen (1990a). The adaptive systems resulting from aHUGIN are able to adjust the conditional probabilities in the model. A short analysis of the adaptation task is given and the features of aHUGIN are described. Finally, a session with experiments is reported and the results are discussed. ",aHUGIN: A System Creating Adaptive Causal Probabilistic Networks " Probabilistic reasoning systems combine different probabilistic rules and probabilistic facts to arrive at the desired probability values of consequences. In this paper we describe the MESA-algorithm (Maximum Entropy by Simulated Annealing) that derives a joint distribution of variables or propositions. It takes into account the reliability of probability values and can resolve conflicts between contradictory statements. The joint distribution is represented in terms of marginal distributions and therefore allows processing large inference networks and determining desired probability values with high precision. The procedure derives a maximum entropy distribution subject to the given constraints. It can be applied to inference networks of arbitrary topology and may be extended into a number of directions. ",MESA: Maximum Entropy by Simulated Annealing " This paper describes some results of research on associate systems: knowledge-based systems that flexibly and adaptively support their human users in carrying out complex, time-dependent problem-solving tasks under uncertainty. Based on principles derived from decision theory and decision analysis, a problem-solving approach is presented which can overcome many of the limitations of traditional expert systems. This approach implements an explicit model of the human user's problem-solving capabilities as an integral element in the overall problem solving architecture. 
This integrated model, represented as an influence diagram, is the basis for achieving adaptive task sharing behavior between the associate system and the human user. This associate system model has been applied toward ongoing research on a Mars Rover Manager's Associate (MRMA). MRMA's role would be to manage a small fleet of robotic rovers on the Martian surface. The paper describes results for a specific scenario where MRMA examines the benefits and costs of consulting human experts on Earth to assist a Mars rover with a complex resource management decision. ",Decision Methods for Adaptive Task-Sharing in Associate Systems " Although the notion of diagnostic problem has been extensively investigated in the context of static systems, in most practical applications the behavior of the modeled system varies significantly over time. The goal of the paper is to propose a novel approach to the modeling of uncertainty about temporal evolutions of time-varying systems and a characterization of model-based temporal diagnosis. Since in most real world cases knowledge about the temporal evolution of the system to be diagnosed is uncertain, we consider the case when probabilistic temporal knowledge is available for each component of the system and we choose to model it by means of Markov chains. In fact, we aim at exploiting the statistical assumptions underlying reliability theory in the context of the diagnosis of time-varying systems. We finally show how to exploit Markov chain theory in order to discard, in the diagnostic process, very unlikely diagnoses. ",Modeling Uncertain Temporal Evolutions in Model-Based Diagnosis " An expert classification system having statistical information about the prior probabilities of the different classes should be able to use this knowledge to reduce the amount of additional information that it must collect, e.g., through questions, in order to make a correct classification. This paper examines how best to use such prior information and additional information-collection opportunities to reduce uncertainty about the class to which a case belongs, thus minimizing the average cost or effort required to correctly classify new cases. ","Guess-And-Verify Heuristics for Reducing Uncertainties in Expert Classification Systems" " This paper describes the architecture of R&D Analyst, a commercial intelligent decision system for evaluating corporate research and development projects and portfolios. In analyzing projects, R&D Analyst interactively guides a user in constructing an influence diagram model for an individual research project. The system's interactive approach can be clearly explained from a blackboard system perspective. The opportunistic reasoning emphasis of blackboard systems satisfies the flexibility requirements of model construction, thereby suggesting that a similar architecture would be valuable for developing normative decision systems in other domains. Current research is aimed at extending the system architecture to explicitly consider sequential decisions involving limited temporal, financial, and physical resources. ","R&D Analyst: An Interactive Approach to Normative Decision System Model Construction" " Many AI synthesis problems such as planning or scheduling may be modelled as constraint satisfaction problems (CSP). A CSP is typically defined as the problem of finding any consistent labeling for a fixed set of variables satisfying all given constraints between these variables. 
However, for many real tasks such as job-shop scheduling, time-table scheduling or design, all these constraints do not have the same significance and do not necessarily have to be satisfied. A first distinction can be made between hard constraints, which every solution should satisfy, and soft constraints, whose satisfaction does not have to be certain. In this paper, we formalize the notion of possibilistic constraint satisfaction problems that allows the modeling of uncertainly satisfied constraints. We use a possibility distribution over labelings to represent respective possibilities of each labeling. Necessity-valued constraints allow a simple expression of the respective certainty degrees of each constraint. The main advantage of our approach is its integration in the CSP technical framework. Most classical techniques, such as Backtracking (BT), arc-consistency enforcing (AC) or Forward Checking, have been extended to handle possibilistic CSPs and are effectively implemented. The utility of our approach is demonstrated on a simple design problem. ","Possibilistic Constraint Satisfaction Problems or ""How to handle soft constraints?""" " The analysis of decision making under uncertainty is closely related to the analysis of probabilistic inference. Indeed, much of the research into efficient methods for probabilistic inference in expert systems has been motivated by the fundamental normative arguments of decision theory. In this paper we show how the developments underlying those efficient methods can be applied immediately to decision problems. In addition to general approaches which need know nothing about the actual probabilistic inference method, we suggest some simple modifications to the clustering family of algorithms in order to efficiently incorporate decision making capabilities. ",Decision Making Using Probabilistic Inference Methods " This paper introduces the notions of independence and conditional independence in valuation-based systems (VBS). VBS is an axiomatic framework capable of representing many different uncertainty calculi. We define independence and conditional independence in terms of factorization of the joint valuation. The definitions of independence and conditional independence in VBS generalize the corresponding definitions in probability theory. Our definitions apply not only to probability theory, but also to Dempster-Shafer's belief-function theory, Spohn's epistemic-belief theory, and Zadeh's possibility theory. In fact, they apply to any uncertainty calculi that fit in the framework of valuation-based systems. ",Conditional Independence in Uncertainty Theories " Within the transferable belief model, positive basic belief masses can be allocated to the empty set, leading to unnormalized belief functions. The nature of these unnormalized beliefs is analyzed. ","The Nature of the Unnormalized Beliefs Encountered in the Transferable Belief Model" " The general use of subjective probabilities to model belief has been justified using many axiomatic schemes. For example, 'consistent betting behavior' arguments are well-known. To those not already convinced of the unique fitness and generality of probability models, such justifications are often unconvincing. The present paper explores another rationale for probability models. 'Qualitative probability,' which is known to provide stringent constraints on belief representation schemes, is derived from five simple assumptions about relationships among beliefs. 
While counterparts of familiar rationality concepts such as transitivity, dominance, and consistency are used, the betting context is avoided. The gap between qualitative probability and probability proper can be bridged by any of several additional assumptions. The discussion here relies on results common in the recent AI literature, introducing a sixth simple assumption. The narrative emphasizes models based on unique complete orderings, but the rationale extends easily to motivate set-valued representations of partial orderings as well. ",Intuitions about Ordered Beliefs Leading to Probabilistic Models " Bayesian networks have been used extensively in diagnostic tasks such as medicine, where they represent the dependency relations between a set of symptoms and a set of diseases. A criticism of this type of knowledge representation is that it is restricted to this kind of task, and that it cannot cope with the knowledge required in other artificial intelligence applications. For example, in computer vision, we require the ability to model complex knowledge, including temporal and relational factors. In this paper we extend Bayesian networks to model relational and temporal knowledge for high-level vision. These extended networks have a simple structure which permits us to propagate probability efficiently. We have applied them to the domain of endoscopy, illustrating how the general modelling principles can be used in specific cases. ","Expressing Relational and Temporal Knowledge in Visual Probabilistic Networks" " This paper discusses a target tracking problem in which no dynamic mathematical model is explicitly assumed. A nonlinear filter based on the fuzzy If-then rules is developed. A comparison with a Kalman filter is made, and empirical results show that the performance of the fuzzy filter is better. Intensive simulations suggest that theoretical justification of the empirical results is possible. ",A Fuzzy Logic Approach to Target Tracking " The DUCK-calculus presented here is a recent approach to cope with probabilistic uncertainty in a sound and efficient way. Uncertain rules with bounds for probabilities and explicit conditional independences can be maintained incrementally. The basic inference mechanism relies on local bounds propagation, implementable by deductive databases with a bottom-up fixpoint evaluation. In situations where no precise bounds are deducible, it can be combined with simple operations research techniques on a local scope. In particular, we provide new precise analytical bounds for probabilistic entailment. ",Towards Precision of Probabilistic Bounds Propagation " In a previous paper [Pearl and Verma, 1991] we presented an algorithm for extracting causal influences from independence information, where a causal influence was defined as the existence of a directed arc in all minimal causal models consistent with the data. In this paper we address the question of deciding whether there exists a causal model that explains ALL the observed dependencies and independencies. Formally, given a list M of conditional independence statements, it is required to decide whether there exists a directed acyclic graph (dag) D that is perfectly consistent with M, namely, every statement in M, and no other, is reflected via d-separation in D. We present and analyze an effective algorithm that tests for the existence of such a dag, and produces one, if it exists. 
","An Algorithm for Deciding if a Set of Observed Independencies Has a Causal Explanation" " Jeffrey's rule has been generalized by Wagner to the case in which new evidence bounds the possible revisions of a prior probability below by a Dempsterian lower probability. Classical probability kinematics arises within this generalization as the special case in which the evidentiary focal elements of the bounding lower probability are pairwise disjoint. We discuss a twofold extension of this generalization, first allowing the lower bound to be any two-monotone capacity and then allowing the prior to be a lower envelope. ",Generalizing Jeffrey Conditionalization " In this paper, a unified framework for representing uncertain information based on the notion of an interval structure is proposed. It is shown that the lower and upper approximations of the rough-set model, the lower and upper bounds of incidence calculus, and the belief and plausibility functions all obey the axioms of an interval structure. An interval structure can be used to synthesize the decision rules provided by the experts. An efficient algorithm to find the desirable set of rules is developed from a set of sound and complete inference axioms. ",Interval Structure: A Framework for Representing Uncertain Information " Current Bayesian net representations do not consider structure in the domain and include all variables in a homogeneous network. At any time, a human reasoner in a large domain may direct his attention to only one of a number of natural subdomains, i.e., there is ?localization' of queries and evidence. In such a case, propagating evidence through a homogeneous network is inefficient since the entire network has to be updated each time. This paper presents multiply sectioned Bayesian networks that enable a (localization preserving) representation of natural subdomains by separate Bayesian subnets. The subnets are transformed into a set of permanent junction trees such that evidential reasoning takes place at only one of them at a time. Probabilities obtained are identical to those that would be obtained from the homogeneous network. We discuss attention shift to a different junction tree and propagation of previously acquired evidence. Although the overall system can be large, computational requirements are governed by the size of only one junction tree. ",Exploring Localization in Bayesian Networks for Large Expert Systems " Valuation-based system (VBS) provides a general framework for representing knowledge and drawing inferences under uncertainty. Recent studies have shown that the semantics of VBS can represent and solve Bayesian decision problems (Shenoy, 1991a). The purpose of this paper is to propose a decision calculus for Dempster-Shafer (D-S) theory in the framework of VBS. The proposed calculus uses a weighting factor whose role is similar to the probabilistic interpretation of an assumption that disambiguates decision problems represented with belief functions (Strat 1990). It will be shown that with the presented calculus, if the decision problems are represented in the valuation network properly, we can solve the problems by using fusion algorithm (Shenoy 1991a). It will also be shown the presented decision calculus can be reduced to the calculus for Bayesian probability theory when probabilities, instead of belief functions, are given. 
",A Decision Calculus for Belief Functions in Valuation-Based Systems " This paper presents a new approach for computing posterior probabilities in Bayesian nets, which sidesteps the triangulation problem. The current state of art is the clique tree propagation approach. When the underlying graph of a Bayesian net is triangulated, this approach arranges its cliques into a tree and computes posterior probabilities by appropriately passing around messages in that tree. The computation in each clique is simply direct marginalization. When the underlying graph is not triangulated, one has to first triangulated it by adding edges. Referred to as the triangulation problem, the problem of finding an optimal or even a ?good? triangulation proves to be difficult. In this paper, we propose to first decompose a Bayesian net into smaller components by making use of Tarjan's algorithm for decomposing an undirected graph at all its minimal complete separators. Then, the components are arranged into a tree and posterior probabilities are computed by appropriately passing around messages in that tree. The computation in each component is carried out by repeating the whole procedure from the beginning. Thus the triangulation problem is sidestepped. ",Sidestepping the Triangulation Problem in Bayesian Net Computations " VT (Viterbi training), or hard EM, is an efficient way of parameter learning for probabilistic models with hidden variables. Given an observation $y$, it searches for a state of hidden variables $x$ that maximizes $p(x,y \mid \theta)$ by coordinate ascent on parameters $\theta$ and $x$. In this paper we introduce VT to PRISM, a logic-based probabilistic modeling system for generative models. VT improves PRISM in three ways. First VT in PRISM converges faster than EM in PRISM due to the VT's termination condition. Second, parameters learned by VT often show good prediction performance compared to those learned by EM. We conducted two parsing experiments with probabilistic grammars while learning parameters by a variety of inference methods, i.e.\ VT, EM, MAP and VB. The result is that VT achieved the best parsing accuracy among them in both experiments. Also we conducted a similar experiment for classification tasks where a hidden variable is not a prediction target unlike probabilistic grammars. We found that in such a case VT does not necessarily yield superior performance. Third since VT always deals with a single probability of a single explanation, Viterbi explanation, the exclusiveness condition that is imposed on PRISM programs is no more required if we learn parameters by VT. Last but not least we can say that as VT in PRISM is general and applicable to any PRISM program, it largely reduces the need for the user to develop a specific VT algorithm for a specific model. Furthermore since VT in PRISM can be used just by setting a PRISM flag appropriately, it makes VT easily accessible to (probabilistic) logic programmers. To appear in Theory and Practice of Logic Programming (TPLP). ",Viterbi training in PRISM " This paper examines the interdependence generated between two parent nodes with a common instantiated child node, such as two hypotheses sharing common evidence. The relation so generated has been termed ""intercausal."" It is shown by construction that inter-causal independence is possible for binary distributions at one state of evidence. For such ""CICI"" distributions, the two measures of inter-causal effect, ""multiplicative synergy"" and ""additive synergy"" are equal. 
The well known ""noisy-or"" model is an example of such a distribution. This introduces novel semantics for the noisy-or, as a model of the degree of conflict among competing hypotheses of a common observation. ","""Conditional Inter-Causally Independent"" Node Distributions, a Property of ""Noisy-Or"" Models" " The way experts manage uncertainty usually changes depending on the task they are performing. This fact has led us to consider the problem of communicating modules (task implementations) in a large and structured knowledge-based system when modules have different uncertainty calculi. In this paper, the analysis of the communication problem is made assuming that (i) each uncertainty calculus is an inference mechanism defining an entailment relation, and therefore the communication is considered to be inference-preserving, and (ii) we restrict ourselves to the case in which the different uncertainty calculi are given by a class of truth functional Multiple-valued Logics. ",Combining Multiple-Valued Logics in Modular Expert Systems " An approach to reasoning with default rules where the proportion of exceptions, or more generally the probability of encountering an exception, can be at least roughly assessed is presented. It is based on local uncertainty propagation rules which provide the best bracketing of a conditional probability of interest from the knowledge of the bracketing of some other conditional probabilities. A procedure that uses two such propagation rules repeatedly is proposed in order to estimate any simple conditional probability of interest from the available knowledge. The iterative procedure, which does not require independence assumptions, looks promising with respect to the linear programming method. Improved bounds for conditional probabilities are given when independence assumptions hold. ",Constraint Propagation with Imprecise Conditional Probabilities " This paper presents a plausible reasoning system to illustrate some broad issues in knowledge representation: dualities between different reasoning forms, the difficulty of unifying complementary reasoning styles, and the approximate nature of plausible reasoning. These issues have a common underlying theme: there should be an underlying belief calculus of which the many different reasoning forms are special cases, sometimes approximate. The system presented allows reasoning about defaults, likelihood, necessity and possibility in a manner similar to the earlier work of Adams. The system is based on the belief calculus of subjective Bayesian probability which itself is based on a few simple assumptions about how belief should be manipulated. Approximations, semantics, consistency and consequence results are presented for the system. While this puts these often discussed plausible reasoning forms on a probabilistic footing, useful application to practical problems remains an issue. ",Some Properties of Plausible Reasoning " Theory refinement is the task of updating a domain theory in the light of new cases, to be done automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. 
The problem is reduced to an incremental learning task as follows: the learning system is initially primed with a partial theory supplied by a domain expert, and thereafter maintains its own internal representation of alternative theories which is able to be interrogated by the domain expert and able to be incrementally refined from data. Algorithms for refinement of Bayesian networks are presented to illustrate what is meant by ""partial theory"", ""alternative theory representation"", etc. The algorithms are an incremental variant of batch learning algorithms from the literature and so can work well in both batch and incremental mode. ",Theory Refinement on Bayesian Networks " In this paper, we consider several types of information and methods of combination associated with incomplete probabilistic systems. We discriminate between 'a priori' and evidential information. The former is a description of the whole population, the latter is a restriction based on observations for a particular case. Then, we propose different combination methods for each one of them. We also consider conditioning as the heterogeneous combination of 'a priori' and evidential information. The evidential information is represented as a convex set of likelihood functions. These will have an associated possibility distribution with behavior according to classical Possibility Theory. ",Combination of Upper and Lower Probabilities " Useless paths are a chronic problem for marker-passing techniques. We use a probabilistic analysis to justify a method for quickly identifying and rejecting useless paths. Using the same analysis, we identify key conditions and assumptions necessary for marker-passing to perform well. ","A Probabilistic Analysis of Marker-Passing Techniques for Plan-Recognition" " Research on Symbolic Probabilistic Inference (SPI) [2, 3] has provided an algorithm for resolving general queries in Bayesian networks. SPI applies the concept of dependency directed backward search to probabilistic inference, and is incremental with respect to both queries and observations. Unlike traditional Bayesian network inferencing algorithms, the SPI algorithm is goal-directed, performing only those calculations that are required to respond to queries. Research to date on SPI applies to Bayesian networks with discrete-valued variables and does not address variables with continuous values. In this paper, we extend the SPI algorithm to handle Bayesian networks made up of continuous variables where the relationships between the variables are restricted to be 'linear Gaussian'. We call this variation of the SPI algorithm SPI Continuous (SPIC). SPIC modifies the three basic SPI operations: multiplication, summation, and substitution. However, SPIC retains the framework of the SPI algorithm, namely building the search tree and the recursive query mechanism, and therefore retains the goal-directed and incrementality features of SPI. ",Symbolic Probabilistic Inference with Continuous Variables " Recent research on the Symbolic Probabilistic Inference (SPI) algorithm [2] has focused attention on the importance of resolving general queries in Bayesian networks. SPI applies the concept of dependency-directed backward search to probabilistic inference, and is incremental with respect to both queries and observations. In response to this research we have extended the evidence potential algorithm [3] with the same features. We call the extension symbolic evidence potential inference (SEPI). 
SEPI like SPI can handle generic queries and is incremental with respect to queries and observations. While in SPI, operations are done on a search tree constructed from the nodes of the original network, in SEPI, a clique-tree structure obtained from the evidence potential algorithm [3] is the basic framework for recursive query processing. In this paper, we describe the systematic query and caching procedure of SEPI. SEPI begins with finding a clique tree from a Bayesian network-the standard procedure of the evidence potential algorithm. With the clique tree, various probability distributions are computed and stored in each clique. This is the ?pre-processing? step of SEPI. Once this step is done, the query can then be computed. To process a query, a recursive process similar to the SPI algorithm is used. The queries are directed to the root clique and decomposed into queries for the clique's subtrees until a particular query can be answered at the clique at which it is directed. The algorithm and the computation are simple. The SEPI algorithm will be presented in this paper along with several examples. ",Symbolic Probabilistic Inference with Evidence Potential " This paper presents a Bayesian method for constructing Bayesian belief networks from a database of cases. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. We relate the methods in this paper to previous work, and we discuss open problems. ","A Bayesian Method for Constructing Bayesian Belief Networks from Databases" " We present a generalization of the local expression language used in the Symbolic Probabilistic Inference (SPI) approach to inference in belief nets [1l, [8]. The local expression language in SPI is the language in which the dependence of a node on its antecedents is described. The original language represented the dependence as a single monolithic conditional probability distribution. The extended language provides a set of operators (*, +, and -) which can be used to specify methods for combining partial conditional distributions. As one instance of the utility of this extension, we show how this extended language can be used to capture the semantics, representational advantages, and inferential complexity advantages of the ""noisy or"" relationship. ","Local Expression Languages for Probabilistic Dependence: a Preliminary Report" " The ability to reason under uncertainty and with incomplete information is a fundamental requirement of decision support technology. In this paper we argue that the concentration on theoretical techniques for the evaluation and selection of decision options has distracted attention from many of the wider issues in decision making. Although numerical methods of reasoning under uncertainty have strong theoretical foundations, they are representationally weak and only deal with a small part of the decision process. Knowledge based systems, on the other hand, offer greater flexibility but have not been accompanied by a clear decision theory. We describe here work which is under way towards providing a theoretical framework for symbolic decision procedures. A central proposal is an extended form of inference which we call argumentation; reasoning for and against decision options from generalised domain theories. 
The approach has been successfully used in several decision support applications, but it is argued that a comprehensive decision theory must cover autonomous decision making, where the agent can formulate questions as well as take decisions. A major theoretical challenge for this theory is to capture the idea of reflection to permit decision agents to reason about their goals, what they believe and why, and what they need to know or do in order to achieve their goals. ",Symbolic Decision Theory and Autonomous Systems " A reason maintenance system which extends an ATMS through Mukaidono's fuzzy logic is described. It supports a problem solver in situations affected by incomplete information and vague data, by allowing nonmonotonic inferences and the revision of previous conclusions when contradictions are detected. ",A Reason Maintenace System Dealing with Vague Data " This paper discuses multiple Bayesian networks representation paradigms for encoding asymmetric independence assertions. We offer three contributions: (1) an inference mechanism that makes explicit use of asymmetric independence to speed up computations, (2) a simplified definition of similarity networks and extensions of their theory, and (3) a generalized representation scheme that encodes more types of asymmetric independence assertions than do similarity networks. ",Advances in Probabilistic Reasoning " In this paper, we consider one aspect of the problem of applying decision theory to the design of agents that learn how to make decisions under uncertainty. This aspect concerns how an agent can estimate probabilities for the possible states of the world, given that it only makes limited observations before committing to a decision. We show that the naive application of statistical tools can be improved upon if the agent can determine which of his observations are truly relevant to the estimation problem at hand. We give a framework in which such determinations can be made, and define an estimation procedure to use them. Our framework also suggests several extensions, which show how additional knowledge can be used to improve tile estimation procedure still further. ",Probability Estimation in Face of Irrelevant Information " Value-of-information analyses provide a straightforward means for selecting the best next observation to make, and for determining whether it is better to gather additional information or to act immediately. Determining the next best test to perform, given a state of uncertainty about the world, requires a consideration of the value of making all possible sequences of observations. In practice, decision analysts and expert-system designers have avoided the intractability of exact computation of the value of information by relying on a myopic approximation. Myopic analyses are based on the assumption that only one additional test will be performed, even when there is an opportunity to make a large number of observations. We present a nonmyopic approximation for value of information that bypasses the traditional myopic analyses by exploiting the statistical properties of large samples. ",An Approximate Nonmyopic Computation for Value of Information " Since exact probabilistic inference is intractable in general for large multiply connected belief nets, approximate methods are required. A promising approach is to use heuristic search among hypotheses (instantiations of the network) to find the most probable ones, as in the TopN algorithm. 
Search is based on the relative probabilities of hypotheses which are efficient to compute. Given upper and lower bounds on the relative probability of partial hypotheses, it is possible to obtain bounds on the absolute probabilities of hypotheses. Best-first search aimed at reducing the maximum error progressively narrows the bounds as more hypotheses are examined. Here, qualitative probabilistic analysis is employed to obtain bounds on the relative probability of partial hypotheses for the BN20 class of networks networks and a generalization replacing the noisy OR assumption by negative synergy. The approach is illustrated by application to a very large belief network, QMR-BN, which is a reformulation of the Internist-1 system for diagnosis in internal medicine. ","Search-based Methods to Bound Diagnostic Probabilities in Very Large Belief Nets" " We discuss representing and reasoning with knowledge about the time-dependent utility of an agent's actions. Time-dependent utility plays a crucial role in the interaction between computation and action under bounded resources. We present a semantics for time-dependent utility and describe the use of time-dependent information in decision contexts. We illustrate our discussion with examples of time-pressured reasoning in Protos, a system constructed to explore the ideal control of inference by reasoners with limit abilities. ",Time-Dependent Utility and Action Under Uncertainty " Traditional approaches to non-monotonic reasoning fail to satisfy a number of plausible axioms for belief revision and suffer from conceptual difficulties as well. Recent work on ranked preferential models (RPMs) promises to overcome some of these difficulties. Here we show that RPMs are not adequate to handle iterated belief change. Specifically, we show that RPMs do not always allow for the reversibility of belief change. This result indicates the need for numerical strengths of belief. ",Non-monotonic Reasoning and the Reversibility of Belief Change " We motivate and describe a theory of belief in this paper. This theory is developed with the following view of human belief in mind. Consider the belief that an event E will occur (or has occurred or is occurring). An agent either entertains this belief or does not entertain this belief (i.e., there is no ""grade"" in entertaining the belief). If the agent chooses to exercise ""the will to believe"" and entertain this belief, he/she/it is entitled to a degree of confidence c (1 > c > 0) in doing so. Adopting this view of human belief, we conjecture that whenever an agent entertains the belief that E will occur with c degree of confidence, the agent will be surprised (to the extent c) upon realizing that E did not occur. ",Belief and Surprise - A Belief-Function Formulation " The categorial approach to evidential reasoning can be seen as a combination of the probability kinematics approach of Richard Jeffrey (1965) and the maximum (cross-) entropy inference approach of E. T. Jaynes (1957). As a consequence of that viewpoint, it is well known that category theory provides natural definitions for logical connectives. In particular, disjunction and conjunction are modelled by general categorial constructions known as products and coproducts. In this paper, I focus mainly on Dempster-Shafer theory of belief functions for which I introduce a category I call Dempster?s category. I prove the existence of and give explicit formulas for conjunction and disjunction in the subcategory of separable belief functions. 
In Dempster?s category, the new defined conjunction can be seen as the most cautious conjunction of beliefs, and thus no assumption about distinctness (of the sources) of beliefs is needed as opposed to Dempster?s rule of combination, which calls for distinctness (of the sources) of beliefs. ","Evidential Reasoning in a Categorial Perspective: Conjunction and Disjunction of Belief Functions" " The concept of movable evidence masses that flow from supersets to subsets as specified by experts represents a suitable framework for reasoning under uncertainty. The mass flow is controlled by specialization matrices. New evidence is integrated into the frame of discernment by conditioning or revision (Dempster's rule of conditioning), for which special specialization matrices exist. Even some aspects of non-monotonic reasoning can be represented by certain specialization matrices. ",Reasoning with Mass Distributions " Any probabilistic model of a problem is based on assumptions which, if violated, invalidate the model. Users of probability based decision aids need to be alerted when cases arise that are not covered by the aid's model. Diagnosis of model failure is also necessary to control dynamic model construction and revision. This paper presents a set of decision theoretically motivated heuristics for diagnosing situations in which a model is likely to provide an inadequate representation of the process being modeled. ",Conflict and Surprise: Heuristics for Model Revision " A series of monte carlo studies were performed to compare the behavior of some alternative procedures for reasoning under uncertainty. The behavior of several Bayesian, linear model and default reasoning procedures were examined in the context of increasing levels of calibration error. The most interesting result is that Bayesian procedures tended to output more extreme posterior belief values (posterior beliefs near 0.0 or 1.0) than other techniques, but the linear models were relatively less likely to output strong support for an erroneous conclusion. Also, accounting for the probabilistic dependencies between evidence items was important for both Bayesian and linear updating procedures. ",Reasoning under Uncertainty: Some Monte Carlo Results " This paper outlines a methodology for analyzing the representational support for knowledge-based decision-modeling in a broad domain. A relevant set of inference patterns and knowledge types are identified. By comparing the analysis results to existing representations, some insights are gained into a design approach for integrating categorical and uncertain knowledge in a context sensitive manner. ",Representation Requirements for Supporting Decision Model Formulation " When a planner must decide whether it has enough evidence to make a decision based on probability, it faces the sample size problem. Current planners using probabilities need not deal with this problem because they do not generate their probabilities from observations. This paper presents an event based language in which the planner's probabilities are calculated from the binomial random variable generated by the observed ratio of one type of event to another. Such probabilities are subject to error, so the planner must introspect about their validity. Inferences about the probability of these events can be made using statistics. Inferences about the validity of the approximations can be made using interval estimation. 
Interval estimation allows the planner to avoid making choices that are only weakly supported by the planner's evidence. ",A Language for Planning with Statistics " Selecting the right reference class and the right interval when faced with conflicting candidates and no possibility of establishing subset style dominance has been a problem for Kyburg's Evidential Probability system. Various methods have been proposed by Loui and Kyburg to solve this problem in a way that is both intuitively appealing and justifiable within Kyburg's framework. The scheme proposed in this paper leads to stronger statistical assertions without sacrificing too much of the intuitive appeal of Kyburg's latest proposal. ",A Modification to Evidential Probability " The belief network is a well-known graphical structure for representing independences in a joint probability distribution. The methods, which perform probabilistic inference in belief networks, often treat the conditional probabilities which are stored in the network as certain values. However, if one takes either a subjectivistic or a limiting frequency approach to probability, one can never be certain of probability values. An algorithm should not only be capable of reporting the probabilities of the alternatives of remaining nodes when other nodes are instantiated; it should also be capable of reporting the uncertainty in these probabilities relative to the uncertainty in the probabilities which are stored in the network. In this paper a method for determining the variances in inferred probabilities is obtained under the assumption that a posterior distribution on the uncertainty variables can be approximated by the prior distribution. It is shown that this assumption is plausible if their is a reasonable amount of confidence in the probabilities which are stored in the network. Furthermore in this paper, a surprising upper bound for the prior variances in the probabilities of the alternatives of all nodes is obtained in the case where the probability distributions of the probabilities of the alternatives are beta distributions. It is shown that the prior variance in the probability at an alternative of a node is bounded above by the largest variance in an element of the conditional probability distribution for that node. ",Investigation of Variances in Belief Networks " At last year?s Uncertainty in AI Conference, we reported the results of a sensitivity analysis study of Pathfinder. Our findings were quite unexpected-slight variations to Pathfinder?s parameters appeared to lead to substantial degradations in system performance. A careful look at our first analysis, together with the valuable feedback provided by the participants of last year?s conference, led us to conduct a follow-up study. Our follow-up differs from our initial study in two ways: (i) the probabilities 0.0 and 1.0 remained unchanged, and (ii) the variations to the probabilities that are close to both ends (0.0 or 1.0) were less than the ones close to the middle (0.5). The results of the follow-up study look more reasonable-slight variations to Pathfinder?s parameters now have little effect on its performance. Taken together, these two sets of results suggest a viable extension of a common decision analytic sensitivity analysis to the larger, more complex settings generally encountered in artificial intelligence. 
",A Sensitivity Analysis of Pathfinder: A Follow-up Study " In this paper we study the uses and the semantics of non-monotonic negation in probabilistic deductive data bases. Based on the stable semantics for classical logic programming, we introduce the notion of stable formula, functions. We show that stable formula, functions are minimal fixpoints of operators associated with probabilistic deductive databases with negation. Furthermore, since a. probabilistic deductive database may not necessarily have a stable formula function, we provide a stable class semantics for such databases. Finally, we demonstrate that the proposed semantics can handle default reasoning naturally in the context of probabilistic deduction. ",Non-monotonic Negation in Probabilistic Deductive Databases " We present a general architecture for the monitoring and diagnosis of large scale sensor-based systems with real time diagnostic constraints. This architecture is multileveled, combining a single monitoring level based on statistical methods with two model based diagnostic levels. At each level, sources of uncertainty are identified, and integrated methodologies for uncertainty management are developed. The general architecture was applied to the monitoring and diagnosis of a specific nuclear physics detector at Lawrence Berkeley National Laboratory that contained approximately 5000 components and produced over 500 channels of output data. The general architecture is scalable, and work is ongoing to apply it to detector systems one and two orders of magnitude more complex. ","Management of Uncertainty in the Multi-Level Monitoring and Diagnosis of the Time of Flight Scintillation Array" " The EM-algorithm is a general procedure to get maximum likelihood estimates if part of the observations on the variables of a network are missing. In this paper a stochastic version of the algorithm is adapted to probabilistic neural networks describing the associative dependency of variables. These networks have a probability distribution, which is a special case of the distribution generated by probabilistic inference networks. Hence both types of networks can be combined allowing to integrate probabilistic rules as well as unspecified associations in a sound way. The resulting network may have a number of interesting features including cycles of probabilistic rules, hidden 'unobservable' variables, and uncertain and contradictory evidence. ","Integrating Probabilistic Rules into Neural Networks: A Stochastic EM Learning Algorithm" " This paper presents a simple framework for Horn clause abduction, with probabilities associated with hypotheses. It is shown how this representation can represent any probabilistic knowledge representable in a Bayesian belief network. The main contributions are in finding a relationship between logical and probabilistic notions of evidential reasoning. This can be used as a basis for a new way to implement Bayesian Networks that allows for approximations to the value of the posterior probabilities, and also points to a way that Bayesian networks can be extended beyond a propositional language. ",Representing Bayesian Networks within Probabilistic Horn Abduction " A new probabilistic network construction system, DYNASTY, is proposed for diagnostic reasoning given variables whose probabilities change over time. Diagnostic reasoning is formulated as a sequential stochastic process, and is modeled using influence diagrams. 
Given a set O of observations, DYNASTY creates an influence diagram in order to devise the best action given O. Sensitivity analyses are conducted to determine if the best network has been created, given the uncertainty in network parameters and topology. DYNASTY uses an equivalence class approach to provide decision thresholds for the sensitivity analysis. This equivalence-class approach to diagnostic reasoning differentiates diagnoses only if the required actions are different. A set of network-topology updating algorithms are proposed for dynamically updating the network when necessary. ",Dynamic Network Updating Techniques For Diagnostic Reasoning " For high level path planning, environments are usually modeled as distance graphs, and path planning problems are reduced to computing the shortest path in distance graphs. One major drawback of this modeling is the inability to model uncertainties, which are often encountered in practice. In this paper, a new tool, called U-yraph, is proposed for environment modeling. A U-graph is an extension of distance graphs with the ability to handle a kind of uncertainty. By modeling an uncertain environment as a U-graph, and a navigation problem as a Markovian decision process, we can precisely define a new optimality criterion for navigation plans, and more importantly, we can come up with a general algorithm for computing optimal plans for navigation tasks. ",High Level Path Planning with Uncertainty " Given a universe of discourse X-a domain of possible outcomes-an experiment may consist of selecting one of its elements, subject to the operation of chance, or of observing the elements, subject to imprecision. A priori uncertainty about the actual result of the experiment may be quantified, representing either the likelihood of the choice of :r_X or the degree to which any such X would be suitable as a description of the outcome. The former case corresponds to a probability distribution, while the latter gives a possibility assignment on X. The study of such assignments and their properties falls within the purview of possibility theory [DP88, Y80, Z783. It, like probability theory, assigns values between 0 and 1 to express likelihoods of outcomes. Here, however, the similarity ends. Possibility theory uses the maximum and minimum functions to combine uncertainties, whereas probability theory uses the plus and times operations. This leads to very dissimilar theories in terms of analytical framework, even though they share several semantic concepts. One of the shared concepts consists of expressing quantitatively the uncertainty associated with a given distribution. In probability theory its value corresponds to the gain of information that would result from conducting an experiment and ascertaining an actual result. This gain of information can equally well be viewed as a decrease in uncertainty about the outcome of an experiment. In this case the standard measure of information, and thus uncertainty, is Shannon entropy [AD75, G77]. It enjoys several advantages-it is characterized uniquely by a few, very natural properties, and it can be conveniently used in decision processes. This application is based on the principle of maximum entropy; it has become a popular method of relating decisions to uncertainty. This paper demonstrates that an equally integrated theory can be built on the foundation of possibility theory. We first show how to define measures of in formation and uncertainty for possibility assignments. 
Next we construct an information-based metric on the space of all possibility distributions defined on a given domain. It allows us to capture the notion of proximity in information content among the distributions. Lastly, we show that all the above constructions can be carried out for continuous distributions-possibility assignments on arbitrary measurable domains. We consider this step very significant-finite domains of discourse are but approximations of the real-life infinite domains. If possibility theory is to represent real world situations, it must handle continuous distributions both directly and through finite approximations. In the last section we discuss a principle of maximum uncertainty for possibility distributions. We show how such a principle could be formalized as an inference rule. We also suggest it could be derived as a consequence of simple assumptions about combining information. We would like to mention that possibility assignments can be viewed as fuzzy sets and that every fuzzy set gives rise to an assignment of possibilities. This correspondence has far reaching consequences in logic and in control theory. Our treatment here is independent of any special interpretation; in particular we speak of possibility distributions and possibility measures, defining them as measurable mappings into the interval [0, 1]. Our presentation is intended as a self-contained, albeit terse summary. Topics discussed were selected with care, to demonstrate both the completeness and a certain elegance of the theory. Proofs are not included; we only offer illustrative examples. ",Formal Model of Uncertainty for Possibilistic Rules " Deliberation plays an important role in the design of rational agents embedded in the real-world. In particular, deliberation leads to the formation of intentions, i.e., plans of action that the agent is committed to achieving. In this paper, we present a branching time possible-worlds model for representing and reasoning about, beliefs, goals, intentions, time, actions, probabilities, and payoffs. We compare this possible-worlds approach with the more traditional decision tree representation and provide a transformation from decision trees to possible worlds. Finally, we illustrate how an agent can perform deliberation using a decision-tree representation and then use a possible-worlds model to form and reason about his intentions. ",Deliberation and its Role in the Formation of Intentions " During interactions with human consultants, people are used to providing partial and/or inaccurate information, and still be understood and assisted. We attempt to emulate this capability of human consultants; in computer consultation systems. In this paper, we present a mechanism for handling uncertainty in plan recognition during task-oriented consultations. The uncertainty arises while choosing an appropriate interpretation of a user?s statements among many possible interpretations. Our mechanism handles this uncertainty by using probability theory to assess the probabilities of the interpretations, and complements this assessment by taking into account the information content of the interpretations. The information content of an interpretation is a measure of how well defined an interpretation is in terms of the actions to be performed on the basis of the interpretation. This measure is used to guide the inference process towards interpretations with a higher information content. 
The information content for an interpretation depends on the specificity and the strength of the inferences in it, where the strength of an inference depends on the reliability of the information on which the inference is based. Our mechanism has been developed for use in task-oriented consultation systems. The domain that we have chosen for exploration is that of a travel agency. ","Handling Uncertainty during Plan Recognition in Task-Oriented Consultation Systems" " This paper introduces conceptual relations that synthesize utilitarian and logical concepts, extending the logics of preference of Rescher. We define first, in the context of a possible worlds model, constraint-dependent measures that quantify the relative quality of alternative solutions of reasoning problems or the relative desirability of various policies in control, decision, and planning problems. We show that these measures may be interpreted as truth values in a multi valued logic and propose mechanisms for the representation of complex constraints as combinations of simpler restrictions. These extended logical operations permit also the combination and aggregation of goal-specific quality measures into global measures of utility. We identify also relations that represent differential preferences between alternative solutions and relate them to the previously defined desirability measures. Extending conventional modal logic formulations, we introduce structures for the representation of ignorance about the utility of alternative solutions. Finally, we examine relations between these concepts and similarity based semantic models of fuzzy logic. ",Truth as Utility: A Conceptual Synthesis " We present PULCinella and its use in comparing uncertainty theories. PULCinella is a general tool for Propagating Uncertainty based on the Local Computation technique of Shafer and Shenoy. It may be specialized to different uncertainty theories: at the moment, Pulcinella can propagate probabilities, belief functions, Boolean values, and possibilities. Moreover, Pulcinella allows the user to easily define his own specializations. To illustrate Pulcinella, we analyze two examples by using each of the four theories above. In the first one, we mainly focus on intrinsic differences between theories. In the second one, we take a knowledge engineer viewpoint, and check the adequacy of each theory to a given problem. ","Pulcinella: A General Tool for Propagating Uncertainty in Valuation Networks" " In this article we present two ways of structuring bodies of evidence, which allow us to reduce the complexity of the operations usually performed in the framework of evidence theory. The first structure just partitions the focal elements in a body of evidence by their cardinality. With this structure we are able to reduce the complexity on the calculation of the belief functions Bel, Pl, and Q. The other structure proposed here, the Hierarchical Trees, permits us to reduce the complexity of the calculation of Bel, Pl, and Q, as well as of the Dempster's rule of combination in relation to the brute-force algorithm. Both these structures do not require the generation of all the subsets of the reference domain. ",Structuring Bodies of Evidence " In general, the best explanation for a given observation makes no promises on how good it is with respect to other alternative explanations. A major deficiency of message-passing schemes for belief revision in Bayesian networks is their inability to generate alternatives beyond the second best. 
In this paper, we present a general approach based on linear constraint systems that naturally generates alternative explanations in an orderly and highly efficient manner. This approach is then applied to cost-based abduction problems as well as belief revision in Bayesian net works. ","On the Generation of Alternative Explanations with Implications for Belief Revision" " A control strategy for expert systems is presented which is based on Shafer's Belief theory and the combination rule of Dempster. In contrast to well known strategies it is not sequentially and hypotheses-driven, but parallel and self organizing, determined by the concept of information gain. The information gain, calculated as the maximal difference between the actual evidence distribution in the knowledge base and the potential evidence determines each consultation step. Hierarchically structured knowledge is an important representation form and experts even use several hierarchies in parallel for constituting their knowledge. Hence the control strategy is applied to a layered set of distinct hierarchies. Depending on the actual data one of these hierarchies is chosen by the control strategy for the next step in the reasoning process. Provided the actual data are well matched to the structure of one hierarchy, this hierarchy remains selected for a longer consultation time. If no good match can be achieved, a switch from the actual hierarchy to a competing one will result, very similar to the phenomenon of restructuring in problem solving tasks. Up to now the control strategy is restricted to multi hierarchical knowledge bases with disjunct hierarchies. It is implemented in the expert system IBIG (inference by information gain), being presently applied to acquired speech disorders (aphasia). ",Completing Knowledge by Competing Hierarchies " The graphoid axioms for conditional independence, originally described by Dawid [1979], are fundamental to probabilistic reasoning [Pearl, 19881. Such axioms provide a mechanism for manipulating conditional independence assertions without resorting to their numerical definition. This paper explores a representation for independence statements using multiple undirected graphs and some simple graphical transformations. The independence statements derivable in this system are equivalent to those obtainable by the graphoid axioms. Therefore, this is a purely graphical proof technique for conditional independence. ",A Graph-Based Inference Method for Conditional Independence " This paper proposes a new method for solving Bayesian decision problems. The method consists of representing a Bayesian decision problem as a valuation-based system and applying a fusion algorithm for solving it. The fusion algorithm is a hybrid of local computational methods for computation of marginals of joint probability distributions and the local computational methods for discrete optimization problems. ",A Fusion Algorithm for Solving Bayesian Decision Problems " Irrelevance-based partial MAPs are useful constructs for domain-independent explanation using belief networks. We look at two definitions for such partial MAPs, and prove important properties that are useful in designing algorithms for computing them effectively. We make use of these properties in modifying our standard MAP best-first algorithm, so as to handle irrelevance-based partial MAPs. ",Algorithms for Irrelevance-Based Partial MAPs " Survey of several forms of updating, with a practical illustrative example. 
We study several updating (conditioning) schemes that emerge naturally from a common scenarion to provide some insights into their meaning. Updating is a subtle operation and there is no single method, no single 'good' rule. The choice of the appropriate rule must always be given due consideration. Planchet (1989) presents a mathematical survey of many rules. We focus on the practical meaning of these rules. After summarizing the several rules for conditioning, we present an illustrative example in which the various forms of conditioning can be explained. ",About Updating " In probabilistic logic entailments, even moderate size problems can yield linear constraint systems with so many variables that exact methods are impractical. This difficulty can be remedied in many cases of interest by introducing a three valued logic (true, false, and ""don't care""). The three-valued approach allows the construction of ""compressed"" constraint systems which have the same solution sets as their two-valued counterparts, but which may involve dramatically fewer variables. Techniques to calculate point estimates for the posterior probabilities of entailed sentences are discussed. ",Compressed Constraints in Probabilistic Logic and Their Revision " The presence of latent variables can greatly complicate inferences about causal relations between measured variables from statistical data. In many cases, the presence of latent variables makes it impossible to determine for two measured variables A and B, whether A causes B, B causes A, or there is some common cause. In this paper I present several theorems that state conditions under which it is possible to reliably infer the causal relation between two measured variables, regardless of whether latent variables are acting or not. ",Detecting Causal Relations in the Presence of Unmeasured Variables " In mechanical design, there is often unavoidable uncertainty in estimates of design performance. Evaluation of design alternatives requires consideration of the impact of this uncertainty. Expert heuristics embody assumptions regarding the designer's attitude towards risk and uncertainty that might be reasonable in most cases but inaccurate in others. We present a technique to allow designers to incorporate their own unique attitude towards uncertainty as opposed to those assumed by the domain expert's rules. The general approach is to eliminate aspects of heuristic rules which directly or indirectly include assumptions regarding the user's attitude towards risk, and replace them with explicit, user-specified probabilistic multi attribute utility and probability distribution functions. We illustrate the method in a system for material selection for automobile bumpers. ","A Method for Integrating Utility Analysis into an Expert System for Design Evaluation" " The relationship between belief networks and relational databases is examined. Based on this analysis, a method to construct belief networks automatically from statistical relational data is proposed. A comparison between our method and other methods shows that our method has several advantages when generalization or prediction is deeded. ",From Relational Databases to Belief Networks " A very computationally-efficient Monte-Carlo algorithm for the calculation of Dempster-Shafer belief is described. 
If Bel is the combination using Dempster's Rule of belief functions Bel, ..., Bel,7, then, for subset b of the frame C), Bel(b) can be calculated in time linear in 1(31 and m (given that the weight of conflict is bounded). The algorithm can also be used to improve the complexity of the Shenoy-Shafer algorithms on Markov trees, and be generalised to calculate Dempster-Shafer Belief over other logics. ",A Monte-Carlo Algorithm for Dempster-Shafer Belief " The compatibility of quantitative and qualitative representations of beliefs was studied extensively in probability theory. It is only recently that this important topic is considered in the context of belief functions. In this paper, the compatibility of various quantitative belief measures and qualitative belief structures is investigated. Four classes of belief measures considered are: the probability function, the monotonic belief function, Shafer's belief function, and Smets' generalized belief function. The analysis of their individual compatibility with different belief structures not only provides a sound b