aid: stringlengths, 9 to 15
mid: stringlengths, 7 to 10
abstract: stringlengths, 78 to 2.56k
related_work: stringlengths, 92 to 1.77k
ref_abstract: dict
0808.3971
2949251435
A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination to enhance the sum rate and limited inter-cluster coordination to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving the channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.
With BTSs coordinating for transmission, the network forms an effective MU-MIMO broadcast channel, for which DPC has been shown to be an optimal precoding technique @cite_20 @cite_39 @cite_46 @cite_19 @cite_6 . DPC, while theoretically optimal, is an information-theoretic concept that is difficult to implement in practice. A more practical precoding technique for broadcast MIMO channels is block diagonalization (BD) @cite_23 @cite_7 @cite_49 @cite_8 @cite_47 , which provides each user an interference-free channel through properly designed linear precoding matrices. In addition, it was shown in @cite_29 that BD can achieve a significant part of the ergodic sum capacity of DPC. Therefore, we will apply BD in the multi-cell scenario as the precoding technique for the proposed BTS coordination.
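To make the BD idea above concrete, here is a minimal numpy sketch (not the paper's multi-cell algorithm, and ignoring power allocation): each user's precoding matrix is taken from the null space of the other users' stacked channels, so every user sees an interference-free effective channel.

```python
import numpy as np

def block_diagonalization(H_list):
    """Toy block-diagonalization (BD) precoders for a MIMO broadcast channel.

    H_list[k] is the N_k x N_t channel of user k. Each precoder is built from
    the null space of the other users' stacked channels, so H_list[j] @ W[k]
    is (numerically) zero for j != k, i.e. each user gets an interference-free
    effective channel. Requires N_t >= total number of receive antennas.
    """
    precoders = []
    for k, Hk in enumerate(H_list):
        H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
        _, s, Vh = np.linalg.svd(H_others)            # full SVD: Vh is N_t x N_t
        rank = int(np.sum(s > 1e-10))
        null_basis = Vh[rank:].conj().T               # N_t x (N_t - rank)
        precoders.append(null_basis[:, :Hk.shape[0]]) # one column per user stream
    return precoders

# Tiny check: 3 single-antenna users, 4 transmit antennas.
rng = np.random.default_rng(0)
H = [rng.standard_normal((1, 4)) + 1j * rng.standard_normal((1, 4)) for _ in range(3)]
W = block_diagonalization(H)
for j in range(3):
    for k in range(3):
        if j != k:
            assert np.allclose(H[j] @ W[k], 0, atol=1e-8)  # no inter-user interference
```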
{ "cite_N": [ "@cite_47", "@cite_7", "@cite_8", "@cite_29", "@cite_6", "@cite_39", "@cite_19", "@cite_23", "@cite_49", "@cite_46", "@cite_20" ], "mid": [ "2101790209", "2098126478", "", "2166579323", "2030546921", "", "", "2111451941", "2156004493", "2103749601", "2151795416" ], "abstract": [ "Block diagonalization is one approach for linear preceding in the multiple-input multiple-output broadcast channel that sends multiple interference free data streams to different users in the same cell. Unfortunately, block diagonalization neglects other-cell interference (OCI), which limits the performance of users at the edge of the cell. This paper presents an OCI-aware enhancement to block diagonalization that uses a whitening filter for interference suppression at the receiver and a novel precoder using the interference-plus-noise covariance matrix for each user at the transmitter. For complex Gaussian matrix channels, the asymptotic sum rate of the proposed system is analyzed under a large antenna assumption for isotropic inputs and compared to conventional block diagonalization. The capacity loss due to OCI is quantified in terms of results from single-user MIMO capacity. Several numerical examples compare achievable sum rates, the proposed asymptotic rates, and the capacity loss, in low and high interference regimes.", "The use of space-division multiple access (SDMA) in the downlink of a multiuser multiple-input, multiple-output (MIMO) wireless communications network can provide a substantial gain in system throughput. The challenge in such multiuser systems is designing transmit vectors while considering the co-channel interference of other users. Typical optimization problems of interest include the capacity problem - maximizing the sum information rate subject to a power constraint-or the power control problem-minimizing transmitted power such that a certain quality-of-service metric for each user is met. Neither of these problems possess closed-form solutions for the general multiuser MIMO channel, but the imposition of certain constraints can lead to closed-form solutions. This paper presents two such constrained solutions. The first, referred to as \"block-diagonalization,\" is a generalization of channel inversion when there are multiple antennas at each receiver. It is easily adapted to optimize for either maximum transmission rate or minimum power and approaches the optimal solution at high SNR. The second, known as \"successive optimization,\" is an alternative method for solving the power minimization problem one user at a time, and it yields superior results in some (e.g., low SNR) situations. Both of these algorithms are limited to cases where the transmitter has more antennas than all receive antennas combined. In order to accommodate more general scenarios, we also propose a framework for coordinated transmitter-receiver processing that generalizes the two algorithms to cases involving more receive than transmit antennas. While the proposed algorithms are suboptimal, they lead to simpler transmitter and receiver structures and allow for a reasonable tradeoff between performance and complexity.", "", "The sum capacity of a Gaussian broadcast MIMO channel can be achieved with dirty paper coding (DPC). However, algorithms that approach the DPC sum capacity do not appear viable in the forseeable future, which motivates lower complexity interference suppression techniques. Block diagonalization (BD) is a linear preceding technique for downlink multiuser MIMO systems. 
With perfect channel knowledge at the transmitter, BD can eliminate other users' interference at each receiver. In this paper, we study the sum capacity of BD with and without receive antenna selection. We analytically compare BD without receive antenna selection to DPC for a set of given channels. It is shown that (1) if the user channels are orthogonal to each other, then BD achieves the same sum capacity as DPC; (2) if the user channels lie in the same subspace, then the gain of DPC over BD can be upper bounded by the minimum of the number of transmit and receive antennas. These observations also hold for BD with receive antenna selection. Further, we study the ergodic sum capacity of BD with and without receive antenna selection in a Rayleigh fading channel. Simulations show that BD can achieve a significant part of the total throughput of DPC. An upper bound on the ergodic sum capacity gain of DPC over BD is proposed for easy estimation of the gap between the sum capacity of DPC and BD without receive antenna selection.", "The Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC) is considered. The dirty-paper coding (DPC) rate region is shown to coincide with the capacity region. To that end, a new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality, to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case. Furthermore, the capacity region is characterized under a wide range of input constraints, accounting, as special cases, for the total power and the per-antenna power constraints", "", "", "We introduce a transmit preprocessing technique for the downlink of multiuser multiple-input multiple-output (MIMO) systems. It decomposes the multiuser MIMO downlink channel into multiple parallel independent single-user MIMO downlink channels. Some key properties are that each equivalent single-user MIMO channel has the same properties as a conventional single-user MIMO channel, and that increasing the number of transmit antennas of the multiuser system by one increases the number of spatial channels to each user by one. Simulation results are also provided and these results demonstrate the potential of our technique in terms of performance and capacity.", "This paper addresses the problem of performing orthogonal space-division multiplexing (OSDM) for downlink, point-to-multipoint communications when multiple antennas are utilized at the base station (BS) and (optionally) all mobile stations (MS). Based on a closed-form antenna weight solution for single-user multiple-input multiple-output communications in the presence of other receiver points, we devise an iterative algorithm that finds the multiuser antenna weights for OSDM in downlink or broadcast channels. Upon convergence, each mobile user will receive only the desired activated spatial modes with no cochannel interference. Necessary and sufficient conditions for the existence of OSDM among the number of mobile users, the number of transmit antennas at the BS, and the number of receive antennas at the MS, are also derived. The assumption for the proposed method is that the BS knows the channels for all MS's and that the channel dynamics are quasi-stationary.", "We characterize the sum capacity of the vector Gaussian broadcast channel by showing that the existing inner bound of Marton and the existing upper bound of Sato are tight for this channel. 
We exploit an intimate four-way connection between the vector broadcast channel, the corresponding point-to-point channel (where the receivers can cooperate), the multiple-access channel (MAC) (where the role of transmitters and receivers are reversed), and the corresponding point-to-point channel (where the transmitters can cooperate).", "A Gaussian broadcast channel (GBC) with r single-antenna receivers and t antennas at the transmitter is considered. Both transmitter and receivers have perfect knowledge of the channel. Despite its apparent simplicity, this model is, in general, a nondegraded broadcast channel (BC), for which the capacity region is not fully known. For the two-user case, we find a special case of Marton's (1979) region that achieves optimal sum-rate (throughput). In brief, the transmitter decomposes the channel into two interference channels, where interference is caused by the other user signal. Users are successively encoded, such that encoding of the second user is based on the noncausal knowledge of the interference caused by the first user. The crosstalk parameters are optimized such that the overall throughput is maximum and, surprisingly, this is shown to be optimal over all possible strategies (not only with respect to Marton's achievable region). For the case of r>2 users, we find a somewhat simpler choice of Marton's region based on ordering and successively encoding the users. For each user i in the given ordering, the interference caused by users j>i is eliminated by zero forcing at the transmitter, while interference caused by users j<i is taken into account by coding for noncausally known interference. Under certain mild conditions, this scheme is found to be throughput-wise asymptotically optimal for both high and low signal-to-noise ratio (SNR). We conclude by providing some numerical results for the ergodic throughput of the simplified zero-forcing scheme in independent Rayleigh fading." ] }
0808.3231
2027266161
In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.
Much work has been devoted to the learning of multi-label examples under the umbrella of multi-label learning. Note that multi-label learning studies the problem where a real-world object described by one instance is associated with a number of class labels, which is different from multi-class learning or multi-task learning @cite_1 . Most work on multi-label learning assumes that an instance can be associated with multiple valid labels, but there is also some work assuming that only one of the labels associated with an instance is correct @cite_14 . In multi-class learning each object is only associated with a single label, while in multi-task learning different tasks may involve different domains and different data sets. Actually, traditional two-class and multi-class problems can both be cast as multi-label problems by restricting each instance to a single label. The generality of multi-label problems, however, inevitably makes them more difficult to address.
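As a toy illustration of the representational point above (the label names and helpers are made up), a multi-class example carries exactly one label while a multi-label example carries a label set; restricting every set to size one recovers the two-class or multi-class case.

```python
# Hypothetical label space, purely for illustration.
classes = ["sports", "politics", "travel"]

# Multi-class: each instance has exactly one label.
multiclass_y = ["sports", "travel", "politics"]

# Multi-label: each instance has a *set* of labels (possibly several).
multilabel_y = [{"sports", "travel"}, {"politics"}, {"sports"}]

def to_indicator(label_sets, classes):
    """Encode label sets as binary indicator vectors, one bit per class."""
    return [[int(c in s) for c in classes] for s in label_sets]

# Casting multi-class into multi-label: wrap each label in a singleton set.
as_multilabel = [{y} for y in multiclass_y]

print(to_indicator(multilabel_y, classes))   # rows may have several 1s
print(to_indicator(as_multilabel, classes))  # every row sums to exactly 1
```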
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2144372981", "2144752499" ], "abstract": [ "In this paper, we study a special kind of learning problem in which each training instance is given a set of (or distribution over) candidate class labels and only one of the candidate labels is the correct one. Such a problem can occur, e.g., in an information retrieval setting where a set of words is associated with an image, or if classes labels are organized hierarchically. We propose a novel discriminative approach for handling the ambiguity of class labels in the training examples. The experiments with the proposed approach over five different UCI datasets show that our approach is able to find the correct label among the set of candidate labels and actually achieve performance close to the case when each training instance is given a single correct label. In contrast, naive methods degrade rapidly as more ambiguity is introduced into the labels.", "We study the problem of learning many related tasks simultaneously using kernel methods and regularization. The standard single-task kernel methods, such as support vector machines and regularization networks, are extended to the case of multi-task learning. Our analysis shows that the problem of estimating many task functions with regularization can be cast as a single task learning problem if a family of multi-task kernel functions we define is used. These kernels model relations among the tasks and are derived from a novel form of regularizers. Specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets. In agreement with past empirical work on multi-task learning, the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning particularly when there are many related tasks but few data per task." ] }
0808.3231
2027266161
In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.
Many other multi-label learning algorithms have been developed, such as decision trees, neural networks, @math -nearest neighbor classifiers, support vector machines, etc. Clare and King @cite_82 developed a multi-label version of the C4.5 decision tree by modifying the definition of entropy. Zhang and Zhou @cite_49 presented the multi-label neural network BP-MLL, which is derived from the Backpropagation algorithm by employing an error function capturing the fact that the labels belonging to an instance should be ranked higher than those not belonging to that instance. Zhang and Zhou @cite_50 also proposed the ML-kNN algorithm, which identifies the @math nearest neighbors of the concerned instance and then assigns labels according to the maximum a posteriori principle. Elisseeff and Weston @cite_89 proposed the RankSvm algorithm for multi-label learning by defining a specific cost function and the corresponding margin for multi-label models. Other kinds of multi-label SVMs have been developed in @cite_48 and by Godbole and Sarawagi @cite_0 . In particular, by hierarchically approximating the Bayes optimal classifier for the H-loss, Cesa-Bianchi et al. @cite_38 proposed an algorithm which outperforms simple hierarchical SVMs. Recently, non-negative matrix factorization has also been applied to multi-label learning @cite_60 , and multi-label dimensionality reduction methods have been developed @cite_24 @cite_72 .
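For concreteness, the following is a heavily abridged Python sketch of the ML-kNN idea described above (k nearest neighbors plus a per-label MAP decision). It follows the published description loosely and is not the authors' reference implementation; the toy data at the end is invented.

```python
import numpy as np

def mlknn_fit_predict(X_train, Y_train, X_test, k=5, s=1.0):
    """Abridged ML-kNN sketch: per-label MAP decision from how many of the k
    nearest training neighbors carry that label."""
    m, q = Y_train.shape

    def knn(x, exclude=None):
        d = np.linalg.norm(X_train - x, axis=1)
        if exclude is not None:
            d[exclude] = np.inf          # leave-one-out for training instances
        return np.argsort(d)[:k]

    # Label priors with Laplace smoothing.
    prior1 = (s + Y_train.sum(axis=0)) / (2.0 * s + m)
    prior0 = 1.0 - prior1

    # Estimate P(c of k neighbors have label l | instance has / lacks label l).
    c1 = np.zeros((q, k + 1))
    c0 = np.zeros((q, k + 1))
    for i in range(m):
        counts = Y_train[knn(X_train[i], exclude=i)].sum(axis=0).astype(int)
        for l in range(q):
            (c1 if Y_train[i, l] else c0)[l, counts[l]] += 1
    like1 = (s + c1) / (s * (k + 1) + c1.sum(axis=1, keepdims=True))
    like0 = (s + c0) / (s * (k + 1) + c0.sum(axis=1, keepdims=True))

    # MAP decision per test instance and per label.
    preds = []
    for x in X_test:
        counts = Y_train[knn(x)].sum(axis=0).astype(int)
        preds.append([int(prior1[l] * like1[l, counts[l]] >=
                          prior0[l] * like0[l, counts[l]]) for l in range(q)])
    return np.array(preds)

# Toy usage with random data (illustration only).
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((60, 5)), (rng.random((60, 3)) < 0.3).astype(int)
print(mlknn_fit_predict(X, Y, X[:3], k=7))
```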
{ "cite_N": [ "@cite_38", "@cite_60", "@cite_48", "@cite_89", "@cite_0", "@cite_24", "@cite_72", "@cite_50", "@cite_49", "@cite_82" ], "mid": [ "2091961126", "2056974656", "2156935079", "2118712128", "66588809", "2146012283", "1972490990", "2052684427", "2119466907", "1753402186" ], "abstract": [ "We study hierarchical classification in the general case when an instance could belong to more than one class node in the underlying taxonomy. Experiments done in previous work showed that a simple hierarchy of Support Vectors Machines (SVM) with a top-down evaluation scheme has a surprisingly good performance on this kind of task. In this paper, we introduce a refined evaluation scheme which turns the hierarchical SVM classifier into an approximator of the Bayes optimal classifier with respect to a simple stochastic model for the labels. Experiments on synthetic datasets, generated according to this stochastic model, show that our refined algorithm outperforms the simple hierarchical SVM. On real-world data, however, the advantage brought by our approach is a bit less clear. We conjecture this is due to a higher noise rate for the training labels in the low levels of the taxonomy.", "We present a novel framework for multi-label learning that explicitly addresses the challenge arising from the large number of classes and a small size of training data. The key assumption behind this work is that two examples tend to have large overlap in their assigned class memberships if they share high similarity in their input patterns. We capitalize this assumption by first computing two sets of similarities, one based on the input patterns of examples, and the other based on the class memberships of the examples. We then search for the optimal assignment of class memberships to the unlabeled data that minimizes the difference between these two sets of similarities. The optimization problem is formulated as a constrained Non-negative Matrix Factorization (NMF) problem, and an algorithm is presented to efficiently find the solution. Compared to the existing approaches for multi-label learning, the proposed approach is advantageous in that it is able to explore both the unlabeled data and the correlation among different classes simultaneously. Experiments with text categorization show that our approach performs significantly better than several state-of-the-art classification techniques when the number of classes is large and the size of training data is small.", "In classic pattern recognition problems, classes are mutually exclusive by definition. Classification errors occur when the classes overlap in the feature space. We examine a different situation, occurring when the classes are, by definition, not mutually exclusive. Such problems arise in semantic scene and document classification and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a field scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a different treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. 
Experiments show that our methods are suitable for scene classification; furthermore, our work appears to generalize to other classification problems of the same nature.", "This article presents a Support Vector Machine (SVM) like learning system to handle multi-label problems. Such problems are usually decomposed into many two-class problems but the expressive power of such a system can be weak [5, 7]. We explore a new direct approach. It is based on a large margin ranking system that shares a lot of common properties with SVMs. We tested it on a Yeast gene functional classification problem with positive results.", "In this paper we present methods of enhancing existing discriminative classifiers for multi-labeled predictions. Discriminative methods like support vector machines perform very well for uni-labeled text classification tasks. Multi-labeled classification is a harder task subject to relatively less attention. In the multi-labeled setting, classes are often related to each other or part of a is-a hierarchy. We present a new technique for combining text features and features indicating relationships between classes, which can be used with any discriminative algorithm. We also present two enhancements to the margin of SVMs for building better models in the presence of overlapping classes. We present results of experiments on real world text benchmark datasets. Our new methods beat accuracy of existing methods with statistically significant improvements.", "Latent semantic indexing (LSI) is a well-known unsupervised approach for dimensionality reduction in information retrieval. However if the output information (i.e. category labels) is available, it is often beneficial to derive the indexing not only based on the inputs but also on the target values in the training data set. This is of particular importance in applications with multiple labels, in which each document can belong to several categories simultaneously. In this paper we introduce the multi-label informed latent semantic indexing (MLSI) algorithm which preserves the information of inputs and meanwhile captures the correlations between the multiple outputs. The recovered \"latent semantics\" thus incorporate the human-annotated category information and can be used to greatly improve the prediction accuracy. Empirical study based on two data sets, Reuters-21578 and RCV1, demonstrates very encouraging results.", "Multilabel learning deals with data associated with multiple labels simultaneously. Like other data mining and machine learning tasks, multilabel learning also suffers from the curse of dimensionality. Dimensionality reduction has been studied for many years, however, multilabel dimensionality reduction remains almost untouched. In this article, we propose a multilabel dimensionality reduction method, MDDM, with two kinds of projection strategies, attempting to project the original data into a lower-dimensional feature space maximizing the dependence between the original feature description and the associated class labels. Based on the Hilbert-Schmidt Independence Criterion, we derive a eigen-decomposition problem which enables the dimensionality reduction process to be efficient. Experiments validate the performance of MDDM.", "Multi-label learning originated from the investigation of text categorization problem, where each document may belong to several predefined topics simultaneously. 
In multi-label learning, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances through analyzing training instances with known label sets. In this paper, a multi-label lazy learning approach named ML-KNN is presented, which is derived from the traditional K-nearest neighbor (KNN) algorithm. In detail, for each unseen instance, its K nearest neighbors in the training set are firstly identified. After that, based on statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, maximum a posteriori (MAP) principle is utilized to determine the label set for the unseen instance. Experiments on three different real-world multi-label learning problems, i.e. Yeast gene functional analysis, natural scene classification and automatic web page categorization, show that ML-KNN achieves superior performance to some well-established multi-label learning algorithms.", "In multilabel learning, each instance in the training set is associated with a set of labels and the task is to output a label set whose size is unknown a priori for each unseen instance. In this paper, this problem is addressed in the way that a neural network algorithm named BP-MLL, i.e., backpropagation for multilabel learning, is proposed. It is derived from the popular backpropagation algorithm through employing a novel error function capturing the characteristics of multilabel learning, i.e., the labels belonging to an instance should be ranked higher than those not belonging to that instance. Applications to two real-world multilabel learning problems, i.e., functional genomics and text categorization, show that the performance of BP-MLL is superior to that of some well-established multilabel learning algorithms", "The biological sciences are undergoing an explosion in the amount of available data. New data analysis methods are needed to deal with the data. We present work using KDD to analyse data from mutant phenotype growth experiments with the yeast S. cerevisiae to predict novel gene functions. The analysis of the data presented a number of challenges: multi-class labels, a large number of sparsely populated classes, the need to learn a set of accurate rules (not a complete classification), and a very large amount of missing values. We developed resampling strategies and modified the algorithm C4.5 to deal with these problems. Rules were learnt which are accurate and biologically meaningful. The rules predict function of 83 putative genes of currently unknown function at an estimated accuracy of ≥ 80 ." ] }
0808.3231
2027266161
In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.
Roughly speaking, earlier approaches to multi-label learning attempt to decompose multi-label learning into a number of two-class classification problems @cite_57 @cite_20 or transform it into a label ranking problem @cite_4 @cite_89 , while some later approaches try to exploit the correlation between the labels @cite_84 @cite_60 @cite_72 .
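A small sketch of the "decompose into two-class problems" strategy mentioned above (often called binary relevance), assuming scikit-learn's LogisticRegression as an arbitrary stand-in base learner. Label correlations are deliberately ignored, which is exactly the limitation the later, correlation-aware approaches address.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def binary_relevance_fit(X, Y):
    """Train one independent two-class classifier per label (binary relevance)."""
    models = []
    for l in range(Y.shape[1]):
        y = Y[:, l]
        if y.min() == y.max():                 # degenerate label: store the constant
            models.append(int(y[0]))
        else:
            models.append(LogisticRegression(max_iter=1000).fit(X, y))
    return models

def binary_relevance_predict(models, X):
    cols = [np.full(len(X), m) if isinstance(m, int) else m.predict(X) for m in models]
    return np.column_stack(cols)

# Toy usage with random data (illustration only).
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 4))
Y = (rng.random((80, 3)) < 0.4).astype(int)    # 3 binary labels per instance
models = binary_relevance_fit(X, Y)
print(binary_relevance_predict(models, X[:5]))
```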
{ "cite_N": [ "@cite_4", "@cite_60", "@cite_84", "@cite_89", "@cite_57", "@cite_72", "@cite_20" ], "mid": [ "2053463056", "2056974656", "2129414564", "2118712128", "2149684865", "1972490990", "2114535528" ], "abstract": [ "This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. Our approach is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identification from unconstrained spoken customer responses.", "We present a novel framework for multi-label learning that explicitly addresses the challenge arising from the large number of classes and a small size of training data. The key assumption behind this work is that two examples tend to have large overlap in their assigned class memberships if they share high similarity in their input patterns. We capitalize this assumption by first computing two sets of similarities, one based on the input patterns of examples, and the other based on the class memberships of the examples. We then search for the optimal assignment of class memberships to the unlabeled data that minimizes the difference between these two sets of similarities. The optimization problem is formulated as a constrained Non-negative Matrix Factorization (NMF) problem, and an algorithm is presented to efficiently find the solution. Compared to the existing approaches for multi-label learning, the proposed approach is advantageous in that it is able to explore both the unlabeled data and the correlation among different classes simultaneously. Experiments with text categorization show that our approach performs significantly better than several state-of-the-art classification techniques when the number of classes is large and the size of training data is small.", "We propose probabilistic generative models, called parametric mixture models (PMMs), for multiclass, multi-labeled text categorization problem. Conventionally, the binary classification approach has been employed, in which whether or not text belongs to a category is judged by the binary classifier for every category. In contrast, our approach can simultaneously detect multiple categories of text using PMMs. We derive efficient learning and prediction algorithms for PMMs. We also empirically show that our method could significantly outperform the conventional binary methods when applied to multi-labeled text categorization using real World Wide Web pages.", "This article presents a Support Vector Machine (SVM) like learning system to handle multi-label problems. Such problems are usually decomposed into many two-class problems but the expressive power of such a system can be weak [5, 7]. We explore a new direct approach. It is based on a large margin ranking system that shares a lot of common properties with SVMs. We tested it on a Yeast gene functional classification problem with positive results.", "This paper explores the use of Support Vector Machines (SVMs) for learning text classifiers from examples. It analyzes the particular properties of learning with text data and identifies why SVMs are appropriate for this task. Empirical results support the theoretical findings. 
SVMs achieve substantial improvements over the currently best performing methods and behave robustly over a variety of different learning tasks. Furthermore they are fully automatic, eliminating the need for manual parameter tuning.", "Multilabel learning deals with data associated with multiple labels simultaneously. Like other data mining and machine learning tasks, multilabel learning also suffers from the curse of dimensionality. Dimensionality reduction has been studied for many years, however, multilabel dimensionality reduction remains almost untouched. In this article, we propose a multilabel dimensionality reduction method, MDDM, with two kinds of projection strategies, attempting to project the original data into a lower-dimensional feature space maximizing the dependence between the original feature description and the associated class labels. Based on the Hilbert-Schmidt Independence Criterion, we derive a eigen-decomposition problem which enables the dimensionality reduction process to be efficient. Experiments validate the performance of MDDM.", "This paper focuses on a comparative evaluation of a wide-range of text categorization methods, including previously published results on the Reuters corpus and new results of additional experiments. A controlled study using three classifiers, kNN, LLSF and WORD, was conducted to examine the impact of configuration variations in five versions of Reuters on the observed performance of classifiers. Analysis and empirical evidence suggest that the evaluation results on some versions of Reuters were significantly affected by the inclusion of a large portion of unlabelled documents, mading those results difficult to interpret and leading to considerable confusions in the literature. Using the results evaluated on the other versions of Reuters which exclude the unlabelled documents, the performance of twelve methods are compared directly or indirectly. For indirect compararions, kNN, LLSF and WORD were used as baselines, since they were evaluated on all versions of Reuters that exclude the unlabelled documents. As a global observation, kNN, LLSF and a neural network method had the best performances except for a Naive Bayes approach, the other learning algorithms also performed relatively well." ] }
0808.3231
2027266161
In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.
There has been a lot of research on multi-instance learning, which studies the problem where a real-world object described by a number of instances is associated with a single class label. Here the training set is composed of many bags, each containing multiple instances; a bag is labeled positive if it contains at least one positive instance and negative otherwise. The goal is to label unseen bags correctly. Note that although the training bags are labeled, the labels of their instances are unknown. This learning framework was formalized by Dietterich et al. @cite_87 when they were investigating drug activity prediction.
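The bag representation and labeling rule just described can be illustrated with a tiny sketch; the instances and their hidden per-instance labels below are invented for illustration only.

```python
import numpy as np

def bag_label(instance_labels):
    """Standard multi-instance rule: a bag is positive iff at least one of its
    (hidden) instances is positive."""
    return int(any(instance_labels))

# Two toy bags of 2-D instances; the per-instance labels are made up here only
# to show how the bag labels arise -- the learner never sees them.
bag_a = np.array([[0.1, 0.2], [0.9, 0.8], [0.3, 0.1]])
bag_b = np.array([[0.2, 0.1], [0.3, 0.3]])
hidden_a, hidden_b = [0, 1, 0], [0, 0]

training_set = [(bag_a, bag_label(hidden_a)),   # (bag, 1): positive bag
                (bag_b, bag_label(hidden_b))]   # (bag, 0): negative bag
print([label for _, label in training_set])     # [1, 0]
```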
{ "cite_N": [ "@cite_87" ], "mid": [ "2110119381" ], "abstract": [ "The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89 correct predictions on a musk odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms." ] }
0808.3231
2027266161
In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.
It is worth mentioning that standard multi-instance learning @cite_87 assumes that if a bag contains a positive instance then the bag is positive; this implies that there exists a key instance in a positive bag. Many algorithms were designed based on this assumption. For example, the point with maximal diverse density identified by the Diverse Density algorithm @cite_6 actually corresponds to a key instance, and many algorithms defined the margin of a positive bag by the margin of its positive instance @cite_54 @cite_88 . As research on multi-instance learning has progressed, however, some other assumptions have been introduced. For example, in contrast to assuming that there is a key instance, some work assumes that there is no key instance and that every instance contributes to the bag label @cite_35 @cite_51 . There is also an argument that the instances in a bag should not be treated independently @cite_12 . All these assumptions have been put under the umbrella of multi-instance learning, and in tackling real tasks it is generally difficult to know which assumption fits best; in other words, multi-instance learning algorithms based on different assumptions may excel on different tasks.
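As a concrete illustration of the key-instance assumption, here is a minimal sketch of the noisy-OR diverse density objective that the Diverse Density algorithm maximizes over candidate concept points; the similarity scaling and the toy bags are assumptions for the example, not values from the cited work.

```python
import numpy as np

def diverse_density(t, positive_bags, negative_bags, scale=1.0):
    """Noisy-OR diverse density of a candidate concept point t. A point with
    high DD is close to at least one instance of every positive bag and far
    from all instances of the negative bags, i.e. a key instance candidate."""
    def p_member(bag):
        # Pr(instance belongs to the concept at t), Gaussian-like similarity.
        return np.exp(-scale * np.sum((bag - t) ** 2, axis=1))

    dd = 1.0
    for bag in positive_bags:
        dd *= 1.0 - np.prod(1.0 - p_member(bag))   # at least one instance near t
    for bag in negative_bags:
        dd *= np.prod(1.0 - p_member(bag))          # no instance near t
    return dd

# Toy usage: the candidate shared by both positive bags scores higher.
pos = [np.array([[0.0, 0.0], [1.0, 1.0]]), np.array([[1.1, 0.9], [3.0, 3.0]])]
neg = [np.array([[3.1, 2.9], [0.1, 0.2]])]
print(diverse_density(np.array([1.0, 1.0]), pos, neg))  # higher: near an instance of every positive bag
print(diverse_density(np.array([3.0, 3.0]), pos, neg))  # much lower: the first positive bag has nothing near it
```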
{ "cite_N": [ "@cite_35", "@cite_87", "@cite_54", "@cite_6", "@cite_88", "@cite_51", "@cite_12" ], "mid": [ "1560331282", "2110119381", "2108745803", "2154318594", "1969602766", "2098166271", "2098239572" ], "abstract": [ "In this paper we upgrade linear logistic regression and boosting to multi-instance data, where each example consists of a labeled bag of instances. This is done by connecting predictions for individual instances to a bag-level probability estimate by simple averaging and maximizing the likelihood at the bag level—in other words, by assuming that all instances contribute equally and independently to a bag’s label. We present empirical results for artificial data generated according to the underlying generative model that we assume, and also show that the two algorithms produce competitive results on the Musk benchmark datasets.", "The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89 correct predictions on a musk odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms.", "This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristic ally. Our generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing and document categorization.", "Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem.", "This paper focuses on kernel methods for multi-instance learning. Existing methods require the prediction of the bag to be identical to the maximum of those of its individual instances. However, this is too restrictive as only the sign is important in classification. In this paper, we provide a more complete regularization framework for MI learning by allowing the use of different loss functions between the outputs of a bag and its associated instances. This is especially important as we generalize this for multi-instance regression. 
Moreover, both bag and instance information can now be directly used in the optimization. Instead of using heuristics to solve the resultant non-linear optimization problem, we use the constrained concave-convex procedure which has well-studied convergence properties. Experiments on both classification and regression data sets show that the proposed method leads to improved performance.", "Multiple-instance problems arise from the situations where training class labels are attached to sets of samples (named bags), instead of individual samples within each bag (called instances). Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often provides a large number of redundant or irrelevant features. Hence, 1-norm SVM is applied to select important features as well as construct classifiers simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computation efficiency, and robustness to labeling uncertainty", "Multi-instance learning and semi-supervised learning are different branches of machine learning. The former attempts to learn from a training set consists of labeled bags each containing many unlabeled instances; the latter tries to exploit abundant unlabeled instances when learning with a small number of labeled examples. In this paper, we establish a bridge between these two branches by showing that multi-instance learning can be viewed as a special case of semi-supervised learning. Based on this recognition, we propose the MissSVM algorithm which addresses multi-instance learning using a special semi-supervised support vector machine. Experiments show that solving multi-instance problems from the view of semi-supervised learning is feasible, and the MissSVM algorithm is competitive with state-of-the-art multi-instance learning algorithms." ] }
0808.3231
2027266161
In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.
In the early years of multi-instance learning research, most work focused on multi-instance classification with discrete-valued outputs. Later, multi-instance regression with real-valued outputs was studied @cite_23 @cite_42 , and different versions of generalized multi-instance learning were defined @cite_58 @cite_65 . The main difference is that in standard multi-instance learning there is a single concept, and a bag is positive if it has an instance satisfying this concept; in generalized multi-instance learning @cite_58 @cite_65 there are multiple concepts, and a bag is positive only when all concepts are satisfied (i.e., the bag contains instances from every concept). Recently, research on multi-instance clustering @cite_10 , multi-instance semi-supervised learning @cite_29 and multi-instance active learning @cite_66 has also been reported.
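A tiny sketch contrasting the two bag-labeling rules just described: standard multi-instance learning needs one concept to be hit by some instance, while generalized multi-instance learning requires every concept to be covered. The `near` membership test is a hypothetical placeholder for a learned concept.

```python
import numpy as np

def near(instance, concept_point, radius=0.5):
    """Hypothetical membership test: instance satisfies a concept if it falls
    within `radius` of the concept's point (stand-in for a learned concept)."""
    return np.linalg.norm(instance - concept_point) <= radius

def standard_mil_label(bag, concept):
    # Positive iff at least one instance satisfies the single concept.
    return int(any(near(x, concept) for x in bag))

def generalized_mil_label(bag, concepts):
    # Positive only if every concept is satisfied by some instance in the bag.
    return int(all(any(near(x, c) for x in bag) for c in concepts))

bag = np.array([[0.0, 0.0], [2.0, 2.0]])
concepts = [np.array([0.1, 0.0]), np.array([2.0, 1.9])]
print(standard_mil_label(bag, concepts[0]))                            # 1: the single concept is hit
print(generalized_mil_label(bag, concepts))                            # 1: both concepts are covered
print(generalized_mil_label(bag, concepts + [np.array([5.0, 5.0])]))   # 0: third concept uncovered
```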
{ "cite_N": [ "@cite_29", "@cite_42", "@cite_65", "@cite_23", "@cite_58", "@cite_10", "@cite_66" ], "mid": [ "2076752034", "", "2132786922", "1581034630", "1544144649", "2034445978", "2128678390" ], "abstract": [ "There has been much work on applying multiple-instance (MI) learning to content-based image retrieval (CBIR) where the goal is to rank all images in a known repository using a small labeled data set. Most existing MI learning algorithms are non-transductive in that the images in the repository serve only as test data and are not used in the learning process. We present MISSL (Multiple-Instance Semi-Supervised Learning) that transforms any MI problem into an input for a graph-based single-instance semi-supervised learning method that encodes the MI aspects of the problem simultaneously working at both the bag and point levels. Unlike most prior MI learning algorithms, MISSL makes use of the unlabeled data.", "", "We describe a generalisation of the multiple-instance learning model in which a bag's label is not based on a single instance's proximity to a single target point. Rather, a bag is positive if and only if it contains a collection of instances, each near one of a set of target points. We then adapt a learning-theoretic algorithm for learning in this model and present empirical results on data from robot vision, content-based image retrieval, and protein sequence identification.", "", "In traditional multi-instance (MI) learning, a single positive instance in a bag produces a positive class label. Hence, the learner knows how the bag's class label depends on the labels of the instances in the bag and can explicitly use this information to solve the learning task. In this paper we investigate a generalized view of the MI problem where this simple assumption no longer holds. We assume that an \"interaction\" between instances in a bag determines the class label. Our two-level learning method for this type of problem transforms an MI bag into a single meta-instance that can be learned by a standard propositional method. The meta-instance indicates which regions in the instance space are covered by instances of the bag. Results on both artificial and real-world data show that this two-level classification approach is well suited for generalized MI problems.", "In the setting of multi-instance learning, each object is represented by a bag composed of multiple instances instead of by a single instance in a traditional learning setting. Previous works in this area only concern multi-instance prediction problems where each bag is associated with a binary (classification) or real-valued (regression) label. However, unsupervised multi-instance learning where bags are without labels has not been studied. In this paper, the problem of unsupervised multi-instance learning is addressed where a multi-instance clustering algorithm named Bamic is proposed. Briefly, by regarding bags as atomic data items and using some form of distance metric to measure distances between bags, Bamic adapts the popular k -Medoids algorithm to partition the unlabeled training bags into k disjoint groups of bags. Furthermore, based on the clustering results, a novel multi-instance prediction algorithm named Bartmip is developed. Firstly, each bag is re-represented by a k-dimensional feature vector, where the value of the i-th feature is set to be the distance between the bag and the medoid of the i-th group. 
After that, bags are transformed into feature vectors so that common supervised learners are used to learn from the transformed feature vectors each associated with the original bag's label. Extensive experiments show that Bamic could effectively discover the underlying structure of the data set and Bartmip works quite well on various kinds of multi-instance prediction problems.", "We present a framework for active learning in the multiple-instance (MI) setting. In an MI learning problem, instances are naturally organized into bags and it is the bags, instead of individual instances, that are labeled for training. MI learners assume that every instance in a bag labeled negative is actually negative, whereas at least one instance in a bag labeled positive is actually positive. We consider the particular case in which an MI learner is allowed to selectively query unlabeled instances from positive bags. This approach is well motivated in domains in which it is inexpensive to acquire bag labels and possible, but expensive, to acquire instance labels. We describe a method for learning from labels at mixed levels of granularity, and introduce two active query selection strategies motivated by the MI setting. Our experiments show that learning from instance labels can significantly improve performance of a basic MI learning algorithm in two multiple-instance domains: content-based image retrieval and text classification." ] }
0808.3231
2027266161
In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.
Multi-instance learning has also attracted the attention of the inductive logic programming (ILP) community. It has been suggested that multi-instance problems could be regarded as a bias on inductive logic programming, and that the multi-instance paradigm could be the key link between propositional and relational representations, being more expressive than the former and much easier to learn from than the latter @cite_53 . Alphonse and Matwin @cite_78 approximated a relational learning problem by a multi-instance problem, fed the resulting data to feature selection techniques adapted from propositional representations, and then transformed the filtered data back into a relational representation for a relational learner. In this way, the expressive power of relational representations and the ease of feature selection on propositional representations are gracefully combined. This work confirms that multi-instance learning can act as a bridge between propositional and relational learning.
{ "cite_N": [ "@cite_53", "@cite_78" ], "mid": [ "1601368642", "2157831974" ], "abstract": [ "Two contributions are sketched. A first contribution shows that a special case of relational learning can be transformed into attribute-value learning. However, it is much more tractable to stick to the relational representation than to apply the sketched transformation. This provides a sound theoretical justification for inductive logic programming. In a second contribution, we show how existing attribute-value learning techniques and systems can be upgraded towards inductive logic programming using the ‘Leuven’ methodology and illustrate it using the Claudien, Tilde, ICL, Warmr, TIC, MacCent and RRL systems.", "Attribute-value based representations, standard in today's data mining systems, have a limited expressiveness. Inductive Logic Programming provides an interesting alternative, particularly for learning from structured examples whose parts, each with its own attributes, are related to each other by means of first-order predicates. Several subsets of first-order logic (FOL) with different expressive power have been proposed in Inductive Logic Programming (ILP). The challenge lies in the fact that the more expressive the subset of FOL the learner works with, the more critical the dimensionality of the learning task. The Datalog language is expressive enough to represent realistic learning problems when data is given directly in a relational database, making it a suitable tool for data mining. Consequently, it is important to elaborate techniques that will dynamically decrease the dimensionality of learning tasks expressed in Datalog, just as Feature Subset Selection (FSS) techniques do it in attribute-value learning. The idea of re-using these techniques in ILP runs immediately into a problem as ILP examples have variable size and do not share the same set of literals. We propose here the first paradigm that brings Feature Subset Selection to the level of ILP, in languages at least as expressive as Datalog. The main idea is to first perform a change of representation, which approximates the original relational problem by a multi-instance problem. The representation obtained as the result is suitable for FSS techniques which we adapted from attribute-value learning by taking into account some of the characteristics of the data due to the change of representation. We present the simple FSS proposed for the task, the requisite change of representation, and the entire method combining those two algorithms. The method acts as a filter, preprocessing the relational data, prior to the model building, which outputs relational examples with empirically relevant literals. We discuss experiments in which the method was successfully applied to two real-world domains." ] }
0808.3231
2027266161
In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.
Multi-instance learning techniques have already been applied to diverse applications including image categorization @cite_51 @cite_45 , image retrieval @cite_19 @cite_81 , text categorization @cite_54 @cite_66 , web mining @cite_59 , spam detection @cite_9 , computer security @cite_26 , face detection @cite_90 @cite_15 , computer-aided medical diagnosis @cite_64 , etc.
{ "cite_N": [ "@cite_26", "@cite_64", "@cite_90", "@cite_54", "@cite_9", "@cite_19", "@cite_81", "@cite_45", "@cite_59", "@cite_15", "@cite_51", "@cite_66" ], "mid": [ "", "2107143555", "2166010828", "2108745803", "2141588879", "2152195571", "1554039773", "2136595724", "2061458158", "2135502357", "2098166271", "2128678390" ], "abstract": [ "", "Many computer aided diagnosis (CAD) problems can be best modelled as a multiple-instance learning (MIL) problem with unbalanced data: i.e., the training data typically consists of a few positive bags, and a very large number of negative instances. Existing MIL algorithms are much too computationally expensive for these datasets. We describe CH, a framework for learning a Convex Hull representation of multiple instances that is significantly faster than existing MIL algorithms. Our CH framework applies to any standard hyperplane-based learning algorithm, and for some algorithms, is guaranteed to find the global optimal solution. Experimental studies on two different CAD applications further demonstrate that the proposed algorithm significantly improves diagnostic accuracy when compared to both MIL and traditional classifiers. Although not designed for standard MIL problems (which have both positive and negative bags and relatively balanced datasets), comparisons against other MIL methods on benchmark problems also indicate that the proposed method is competitive with the state-of-the-art.", "A good image object detection algorithm is accurate, fast, and does not require exact locations of objects in a training set. We can create such an object detector by taking the architecture of the Viola-Jones detector cascade and training it with a new variant of boosting that we call MIL-Boost. MILBoost uses cost functions from the Multiple Instance Learning literature combined with the AnyBoost framework. We adapt the feature selection criterion of MILBoost to optimize the performance of the Viola-Jones cascade. Experiments show that the detection rate is up to 1.6 times better using MILBoost. This increased detection rate shows the advantage of simultaneously learning the locations and scales of the objects in the training set along with the parameters of the classifier.", "This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristic ally. Our generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing and document categorization.", "Statistical spam filters are known to be vulnerable to adversarial attacks. One of the more common adversarial attacks, known as the good word attack, thwarts spam filters by appending to spam messages sets of \"good\" words, which are words that are common in legitimate email but rare in spam. We present a counterattack strategy that attempts to differentiate spam from legitimate email in the input space by transforming each email into a bag of multiple segments, and subsequently applying multiple instance logistic regression on the bags. We treat each segment in the bag as an instance. 
An email is classified as spam if at least one instance in the corresponding bag is spam, and as legitimate if all the instances in it are legitimate. We show that a classifier using our multiple instance counterattack strategy is more robust to good word attacks than its single instance counterpart and other single instance learners commonly used in the spam filtering domain.", "In this paper, we develop and test an approach for retrieving images from an image database based on content similarity. First, each picture is divided into many overlapping regions. For each region, the sub-picture is filtered and converted into a feature vector. In this way, each picture is represented by a number of different feature vectors. The user selects positive and negative image examples to train the system. During the training, a multiple-instance learning method known as the diverse density algorithm is employed to determine which feature vector in each image best represents the user's concept, and which dimensions of the feature vectors are important. The system tries to retrieve images with similar feature vectors from the remainder of the database. A variation of the weighted correlation statistic is used to determine image similarity. The approach is tested on a medium-sized database of natural scenes as well as single- and multiple-object images.", "", "Designing computer programs to automatically categorize images using low-level features is a challenging research topic in computer vision. In this paper, we present a new learning technique, which extends Multiple-Instance Learning (MIL), and its application to the problem of region-based image categorization. Images are viewed as bags, each of which contains a number of instances corresponding to regions obtained from image segmentation. The standard MIL problem assumes that a bag is labeled positive if at least one of its instances is positive; otherwise, the bag is negative. In the proposed MIL framework, DD-SVM, a bag label is determined by some number of instances satisfying various properties. DD-SVM first learns a collection of instance prototypes according to a Diverse Density (DD) function. Each instance prototype represents a class of instances that is more likely to appear in bags with the specific label than in the other bags. A nonlinear mapping is then defined using the instance prototypes and maps every bag to a point in a new feature space, named the bag feature space. Finally, standard support vector machines are trained in the bag feature space. We provide experimental results on an image categorization problem and a drug activity prediction problem.", "In multi-instance learning, the training set comprises labeled bags that are composed of unlabeled instances, and the task is to predict the labels of unseen bags. In this paper, a web mining problem, i.e. web index recommendation, is investigated from a multi-instance view. In detail, each web index page is regarded as a bag, while each of its linked pages is regarded as an instance. A user favoring an index page means that he or she is interested in at least one page linked by the index. Based on the browsing history of the user, recommendation could be provided for unseen index pages. An algorithm named Fretcit-kNN, which employs the Minimal Hausdorff distance between frequent term sets and utilizes both the references and citers of an unseen bag in determining its label, is proposed to solve the problem. 
Experiments show that in average the recommendation accuracy of Fretcit-kNN is 81.0 with 71.7 recall and 70.9 precision, which is significantly better than the best algorithm that does not consider the specific characteristics of multi-instance learning, whose performance is 76.3 accuracy with 63.4 recall and 66.1 precision.", "Cascade detectors have been shown to operate extremely rapidly, with high accuracy, and have important applications such as face detection. Driven by this success, cascade learning has been an area of active research in recent years. Nevertheless, there are still challenging technical problems during the training process of cascade detectors. In particular, determining the optimal target detection rate for each stage of the cascade remains an unsolved issue. In this paper, we propose the multiple instance pruning (MIP) algorithm for soft cascades. This algorithm computes a set of thresholds which aggressively terminate computation with no reduction in detection rate or increase in false positive rate on the training dataset. The algorithm is based on two key insights: i) examples that are destined to be rejected by the complete classifier can be safely pruned early; ii) face detection is a multiple instance learning problem. The MIP process is fully automatic and requires no assumptions of probability distributions, statistical independence, or ad hoc intermediate rejection targets. Experimental results on the MIT+CMU dataset demonstrate significant performance advantages.", "Multiple-instance problems arise from the situations where training class labels are attached to sets of samples (named bags), instead of individual samples within each bag (called instances). Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often provides a large number of redundant or irrelevant features. Hence, 1-norm SVM is applied to select important features as well as construct classifiers simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computation efficiency, and robustness to labeling uncertainty", "We present a framework for active learning in the multiple-instance (MI) setting. In an MI learning problem, instances are naturally organized into bags and it is the bags, instead of individual instances, that are labeled for training. MI learners assume that every instance in a bag labeled negative is actually negative, whereas at least one instance in a bag labeled positive is actually positive. We consider the particular case in which an MI learner is allowed to selectively query unlabeled instances from positive bags. This approach is well motivated in domains in which it is inexpensive to acquire bag labels and possible, but expensive, to acquire instance labels. 
We describe a method for learning from labels at mixed levels of granularity, and introduce two active query selection strategies motivated by the MI setting. Our experiments show that learning from instance labels can significantly improve performance of a basic MI learning algorithm in two multiple-instance domains: content-based image retrieval and text classification." ] }
0808.3394
2136917035
This paper addresses the existence and regularity of weak solutions for a fully parabolic model of chemotaxis, with prevention of overcrowding, that degenerates in a two-sided fashion, including an extra nonlinearity represented by a p-Laplacian diffusion term. To prove the existence of weak solutions, a Schauder fixed-point argument is applied to a regularized problem and the compactness method is used to pass to the limit. The local Holder regularity of weak solutions is established using the method of intrinsic scaling. The results are a contribution to showing, qualitatively, to what extent the properties of the classical Keller–Segel chemotaxis models are preserved in a more general setting. Some numerical examples illustrate the model.
Various results on the Hölder regularity of weak solutions to quasilinear parabolic systems are based on the work of DiBenedetto @cite_2 ; the present article also contributes to this direction. Specifically for a chemotaxis model, Bendahmane, Karlsen, and Urbano @cite_4 proved the existence and Hölder regularity of weak solutions for a version of for @math . For a detailed description of the intrinsic scaling method and some applications we refer to the books @cite_2 @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_4", "@cite_2" ], "mid": [ "313676928", "2007672184", "644710670" ], "abstract": [ "This set of lectures, which had its origin in a mini course delivered at the Summer Program of IMPA (Rio de Janeiro), is an introduction to intrinsic scaling, a powerful method in the analysis of degenerate and singular PDEs. @PARASPLIT In the first part, the theory is presented from scratch for the model case of the degenerate p-Laplace equation. This approach brings to light what is really essential in the method, leaving aside technical refinements needed to deal with more general equations, and is entirely self-contained. @PARASPLIT The second part deals with three applications of the theory to relevant models arising from flows in porous media and phase transitions. The aim is to convince the reader of the strength of the method as a systematic approach to regularity for this important class of equations.", "We consider a fully parabolic model for chemotaxis with volume-filling effect and a nonlinear diffusion that degenerates in a two-sided fashion. We address the questions of existence of weak solutions and of their regularity by using, respectively, a regularization method and the technique of intrinsic scaling.", "This monograph evolved out of the 1990 Lipschitz Lectures presented by the author at the University of Bonn, Germany. It recounts recent developments in the attempt to understand the local structure of the solutions of degenerate and singular parabolic partial differential equations." ] }
0808.3394
2136917035
This paper addresses the existence and regularity of weak solutions for a fully parabolic model of chemotaxis, with prevention of overcrowding, that degenerates in a two-sided fashion, including an extra nonlinearity represented by a p-Laplacian diffusion term. To prove the existence of weak solutions, a Schauder fixed-point argument is applied to a regularized problem and the compactness method is used to pass to the limit. The local Holder regularity of weak solutions is established using the method of intrinsic scaling. The results are a contribution to showing, qualitatively, to what extent the properties of the classical Keller–Segel chemotaxis models are preserved in a more general setting. Some numerical examples illustrate the model.
Concerning uniqueness of solutions, the presence of a nonlinear degenerate diffusion term and a nonlinear transport term is a significant obstacle, and we could not obtain uniqueness of weak solutions. This contrasts with the results of @cite_20 , where the authors prove uniqueness of solutions for a degenerate parabolic-elliptic system set in an unbounded domain, using a method that relies on a continuous dependence estimate from @cite_5 ; that method does not apply to our problem because it is difficult to bound @math in @math due to the parabolic nature of .
{ "cite_N": [ "@cite_5", "@cite_20" ], "mid": [ "2029813107", "2078234004" ], "abstract": [ "We study nonlinear degenerate parabolic equations where the flux function @math does not depend Lipschitz continuously on the spatial location @math . By properly adapting the \"doubling of variables\" device due to Kružkov [25] and Carrillo [12], we prove a uniqueness result within the class of entropy solutions for the initial value problem. We also prove a result concerning the continuous dependence on the initial data and the flux function for degenerate parabolic equations with flux function of the form @math , where @math is a vector-valued function and @math is a scalar function.", "The aim of this paper is to discuss the effects of linear and nonlinear diffusion in the large time asymptotic behavior of the Keller–Segel model of chemotaxis with volume filling effect. In the linear diffusion case we provide several sufficient conditions for the diffusion part to dominate and yield decay to zero solutions. We also provide an explicit decay rate towards self–similarity. Moreover, we prove that no stationary solutions with positive mass exist. In the nonlinear diffusion case we prove that the asymptotic behavior is fully determined by whether the diffusivity constant in the model is larger or smaller than the threshold value @math . Below this value we have existence of nondecaying solutions and their convergence (along subsequences) to stationary solutions. For @math all compactly supported solutions are proved to decay asymptotically to zero, unlike in the classical models with linear diffusion, where the asymptotic behavior depends on the initial mass." ] }
0808.1744
2949844345
The Trinity (, 2007) spam classification system is based on a distributed hash table that is implemented using a structured peer-to-peer overlay. Such an overlay must be capable of processing hundreds of messages per second, and must be able to route messages to their destination even in the presence of failures and malicious peers that misroute packets or inject fraudulent routing information into the system. Typically there is tension between the requirements to route messages securely and efficiently in the overlay. We describe a secure and efficient routing extension that we developed within the I3 ( 2004) implementation of the Chord ( 2001) overlay. Secure routing is accomplished through several complementary approaches: First, peers in close proximity form overlapping groups that police themselves to identify and mitigate fraudulent routing information. Second, a form of random routing solves the problem of entire packet flows passing through a malicious peer. Third, a message authentication mechanism links each message to it sender, preventing spoofing. Fourth, each peer's identifier links the peer to its network address, and at the same time uniformly distributes the peers in the key-space. Lastly, we present our initial evaluation of the system, comprising a 255 peer overlay running on a local cluster. We describe our methodology and show that the overhead of our secure implementation is quite reasonable.
The challenge of securing peer-to-peer systems has been around since their advent. Sit and Morris @cite_2 first identified a set of design principles for securing peer-to-peer systems and described a taxonomy of various attacks against them. This work was extended by Wallach @cite_3 who investigated the security aspects of systems such as CAN @cite_13 , Chord @cite_9 , Pastry @cite_11 , and Tapestry @cite_0 , and discussed issues such as key assignment, routing, and excommunication of malicious peers.
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_0", "@cite_2", "@cite_13", "@cite_11" ], "mid": [ "2118428193", "1585819637", "1650675509", "105601597", "2163059190", "2167898414" ], "abstract": [ "A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.", "Peer-to-peer (p2p) networking technologies have gained popularity as a mechanism for users to share files without the need for centralized servers. A p2p network provides a scalable and fault-tolerant mechanism to locate nodes anywhere on a network without maintaining a large amount of routing state. This allows for a variety of applications beyond simple file sharing. Examples include multicast systems, anonymous communications systems, and web caches. We survey security issues that occur in the underlying p2p routing protocols, as well as fairness and trust issues that occur in file sharing and other p2p applications.We discuss how techniques, ranging from cryptography, to random network probing, to economic incentives, can be used to address these problems.", "In today’s chaotic network, data and services are mobile and replicated widely for availability, durability, and locality. Components within this infrastructure interact in rich and complex ways, greatly stressing traditional approaches to name service and routing. This paper explores an alternative to traditional approaches called Tapestry. Tapestry is an overlay location and routing infrastructure that provides location-independent routing of messages directly to the closest copy of an object or service using only point-to-point links and without centralized resources. The routing and directory information within this infrastructure is purely soft state and easily repaired. Tapestry is self-administering, faulttolerant, and resilient under load. This paper presents the architecture and algorithms of Tapestry and explores their advantages through a number of experiments.", "Recent peer-to-peer research has focused on providing efficient hash lookup systems that can be used to build more complex systems. These systems have good properties when their algorithms are executed correctly but have not generally considered how to handle misbehaving nodes. This paper looks at what sorts of security problems are inherent in large peer-to-peer systems based on distributed hash lookup systems. We examine the types of problems that such systems might face, drawing examples from existing systems, and propose some design principles for detecting and preventing these problems.", "Hash tables - which map \"keys\" onto \"values\" - are an essential building block in modern software systems. We believe a similar functionality would be equally valuable to large distributed systems. 
In this paper, we introduce the concept of a Content-Addressable Network (CAN) as a distributed infrastructure that provides hash table-like functionality on Internet-like scales. The CAN is scalable, fault-tolerant and completely self-organizing, and we demonstrate its scalability, robustness and low-latency properties through simulation.", "This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer ap- plications. Pastry performs application-level routing and object location in a po- tentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with a nodeId that is numerically closest to the key, among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space, and notifies applications of new node arrivals, node failures and recoveries. Pastry takes into account network locality; it seeks to minimize the distance messages travel, according to a to scalar proximity metric like the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties." ] }
0808.1744
2949844345
The Trinity (, 2007) spam classification system is based on a distributed hash table that is implemented using a structured peer-to-peer overlay. Such an overlay must be capable of processing hundreds of messages per second, and must be able to route messages to their destination even in the presence of failures and malicious peers that misroute packets or inject fraudulent routing information into the system. Typically there is tension between the requirements to route messages securely and efficiently in the overlay. We describe a secure and efficient routing extension that we developed within the I3 ( 2004) implementation of the Chord ( 2001) overlay. Secure routing is accomplished through several complementary approaches: First, peers in close proximity form overlapping groups that police themselves to identify and mitigate fraudulent routing information. Second, a form of random routing solves the problem of entire packet flows passing through a malicious peer. Third, a message authentication mechanism links each message to it sender, preventing spoofing. Fourth, each peer's identifier links the peer to its network address, and at the same time uniformly distributes the peers in the key-space. Lastly, we present our initial evaluation of the system, comprising a 255 peer overlay running on a local cluster. We describe our methodology and show that the overhead of our secure implementation is quite reasonable.
@cite_7 proposed several approaches to securing peer-to-peer overlays. They proposed delegating the assignment of keys to trusted certification authorities, which would ensure that keys are chosen at random and that each peer is bound to a unique key, with the peer's IP address embedded in the key. To route messages securely, they proposed constrained routing tables, which contain keys from specific locations in the overlay. In our case, Chord already constrains a key's location within the overlay, obviating the need for constrained routing tables. In fact, our self-policing and random routing mechanisms leverage this constraint.
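A minimal illustrative sketch of this kind of binding (a simplification, not the exact construction of @cite_7 or of the implementation evaluated here; the endpoint values are hypothetical): hashing a peer's network address into the identifier space ties the peer to its IP and, at the same time, spreads identifiers uniformly over a Chord-style key space, so any peer can recompute and verify a claimed identifier.

import hashlib

KEY_BITS = 160                                   # Chord-style identifier space

def node_id(ip: str, port: int) -> int:
    """Map an (ip, port) endpoint to a point on the identifier ring."""
    digest = hashlib.sha1(f"{ip}:{port}".encode()).digest()
    return int.from_bytes(digest, "big") % (1 << KEY_BITS)

def verify_id(claimed_id: int, ip: str, port: int) -> bool:
    """A receiving peer recomputes the ID and rejects spoofed identifiers."""
    return claimed_id == node_id(ip, port)

nid = node_id("192.0.2.17", 4001)                # hypothetical endpoint
print(hex(nid), verify_id(nid, "192.0.2.17", 4001))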
{ "cite_N": [ "@cite_7" ], "mid": [ "2171957559" ], "abstract": [ "Structured peer-to-peer overlay networks provide a substrate for the construction of large-scale, decentralized applications, including distributed storage, group communication, and content distribution. These overlays are highly resilient; they can route messages correctly even when a large fraction of the nodes crash or the network partitions. But current overlays are not secure; even a small fraction of malicious nodes can prevent correct message delivery throughout the overlay. This problem is particularly serious in open peer-to-peer systems, where many diverse, autonomous parties without preexisting trust relationships wish to pool their resources. This paper studies attacks aimed at preventing correct message delivery in structured peer-to-peer overlays and presents defenses to these attacks. We describe and evaluate techniques that allow nodes to join the overlay, to maintain routing state, and to forward messages securely in the presence of malicious nodes." ] }
0808.1744
2949844345
The Trinity (, 2007) spam classification system is based on a distributed hash table that is implemented using a structured peer-to-peer overlay. Such an overlay must be capable of processing hundreds of messages per second, and must be able to route messages to their destination even in the presence of failures and malicious peers that misroute packets or inject fraudulent routing information into the system. Typically there is tension between the requirements to route messages securely and efficiently in the overlay. We describe a secure and efficient routing extension that we developed within the I3 ( 2004) implementation of the Chord ( 2001) overlay. Secure routing is accomplished through several complementary approaches: First, peers in close proximity form overlapping groups that police themselves to identify and mitigate fraudulent routing information. Second, a form of random routing solves the problem of entire packet flows passing through a malicious peer. Third, a message authentication mechanism links each message to it sender, preventing spoofing. Fourth, each peer's identifier links the peer to its network address, and at the same time uniformly distributes the peers in the key-space. Lastly, we present our initial evaluation of the system, comprising a 255 peer overlay running on a local cluster. We describe our methodology and show that the overhead of our secure implementation is quite reasonable.
@cite_7 also proposed a routing failure test that tries to determine what nodes are malicious. Their approach also sends multiple copies of the message through diverse routes to ensure message delivery. Our approach is similar but less resource intensive. Our system uses the peer groups to detect faulty routing information, and to ensure that no peer is a choke-point between two other peers. Our system does not attempt to ensure the delivery of all messages, but instead attempts to ensure that some messages will be delivered.
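The toy simulation below illustrates the general idea of routing redundant copies through random waypoints (an illustration only; the ring, lookup rule, and adversary model are made up and do not reproduce the mechanisms of @cite_7 or of the system described here).

import bisect, random

random.seed(1)
ring = sorted(random.getrandbits(32) for _ in range(64))      # node IDs on a ring
malicious = set(random.sample(ring, 8))                        # peers that drop traffic

def successor(key):
    """Chord-style lookup: the first node clockwise from the key."""
    i = bisect.bisect_left(ring, key % (1 << 32))
    return ring[i % len(ring)]

def deliver_via(waypoint_key, dst_key):
    """One copy: source -> waypoint -> destination; fails if either hop is malicious."""
    return successor(waypoint_key) not in malicious and successor(dst_key) not in malicious

def send_redundant(dst_key, copies=3):
    # Best effort: only some copy needs to get through, not all of them.
    return any(deliver_via(random.getrandbits(32), dst_key) for _ in range(copies))

print(sum(send_redundant(random.getrandbits(32)) for _ in range(1000)), "of 1000 delivered")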
{ "cite_N": [ "@cite_7" ], "mid": [ "2171957559" ], "abstract": [ "Structured peer-to-peer overlay networks provide a substrate for the construction of large-scale, decentralized applications, including distributed storage, group communication, and content distribution. These overlays are highly resilient; they can route messages correctly even when a large fraction of the nodes crash or the network partitions. But current overlays are not secure; even a small fraction of malicious nodes can prevent correct message delivery throughout the overlay. This problem is particularly serious in open peer-to-peer systems, where many diverse, autonomous parties without preexisting trust relationships wish to pool their resources. This paper studies attacks aimed at preventing correct message delivery in structured peer-to-peer overlays and presents defenses to these attacks. We describe and evaluate techniques that allow nodes to join the overlay, to maintain routing state, and to forward messages securely in the presence of malicious nodes." ] }
0808.1744
2949844345
The Trinity (, 2007) spam classification system is based on a distributed hash table that is implemented using a structured peer-to-peer overlay. Such an overlay must be capable of processing hundreds of messages per second, and must be able to route messages to their destination even in the presence of failures and malicious peers that misroute packets or inject fraudulent routing information into the system. Typically there is tension between the requirements to route messages securely and efficiently in the overlay. We describe a secure and efficient routing extension that we developed within the I3 ( 2004) implementation of the Chord ( 2001) overlay. Secure routing is accomplished through several complementary approaches: First, peers in close proximity form overlapping groups that police themselves to identify and mitigate fraudulent routing information. Second, a form of random routing solves the problem of entire packet flows passing through a malicious peer. Third, a message authentication mechanism links each message to it sender, preventing spoofing. Fourth, each peer's identifier links the peer to its network address, and at the same time uniformly distributes the peers in the key-space. Lastly, we present our initial evaluation of the system, comprising a 255 peer overlay running on a local cluster. We describe our methodology and show that the overhead of our secure implementation is quite reasonable.
Lastly, there are many ways to secure a peer-to-peer system; for example, LOCKSS @cite_4 uses majority voting among replicas together with cryptographic puzzles that rate-limit participants by forcing them to expend computation @cite_6 . Unfortunately, these approaches severely impact system performance and are not practical in contexts where good performance is a necessity.
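For concreteness, a minimal proof-of-work sketch in the spirit of such puzzles (not the exact pricing functions of @cite_6 ): the requester must find a nonce whose hash, together with the challenge, falls below a target, which is expensive to produce but cheap to verify.

import hashlib
from itertools import count

def solve(challenge: bytes, bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) has `bits` leading zero bits."""
    target = 1 << (256 - bits)
    for nonce in count():
        h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce                          # expensive to find ...

def verify(challenge: bytes, nonce: int, bits: int) -> bool:
    h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - bits))    # ... but cheap to check

nonce = solve(b"peer-join-request", 16)           # 16 bits keeps the demo fast
print(nonce, verify(b"peer-join-request", nonce, 16))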
{ "cite_N": [ "@cite_4", "@cite_6" ], "mid": [ "2950945875", "1601379374" ], "abstract": [ "The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in opinion polls.'' Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection in new ways that ensure even an adversary capable of unlimited effort over decades has only a small probability of causing irrecoverable damage before being detected.", "We present a computational technique for combatting junk mail in particular and controlling access to a shared resource in general. The main idea is to require a user to compute a moderately hard, but not intractable, function in order to gain access to the resource, thus preventing frivolous use. To this end we suggest several pricing Junctions, based on, respectively, extracting square roots modulo a prime, the Fiat-Shamir signature scheme, and the Ong-Schnorr-Shamir (cracked) signature scheme." ] }
0808.2530
2156119110
We consider the problem of designing a fair scheduling algorithm for discrete-time constrained queuing networks. Each queue has dedicated exogenous packet arrivals. There are constraints on which queues can be served simultaneously. This model effectively describes important special instances like network switches, interference in wireless networks, bandwidth sharing for congestion control and traffic scheduling in road roundabouts. Fair scheduling is required because it provides isolation to different traffic flows; isolation makes the system more robust and enables providing quality of service. Existing work on fairness for constrained networks concentrates on flow based fairness. As a main result, we describe a notion of packet based fairness by establishing an analogy with the ranked election problem: packets are voters, schedules are candidates, and each packet ranks the schedules based on its priorities. We then obtain a scheduling algorithm that achieves the described notion of fairness by drawing upon the seminal work of Goodman and Markowitz (1952). This yields the familiar Maximum Weight (MW) style algorithm. As another important result, we prove that the algorithm obtained is throughput optimal. There is no reason a priori why this should be true, and the proof requires nontraditional methods.
To address the issue of fairness in a network, Kelly, Maulloo and Tan (1998) @cite_3 proposed a flow-level model for the Internet. Under this model, the resource allocation that maximizes the global network utility provides a notion of fair rate allocation. We refer the interested reader to the survey-style papers by Low (2003) @cite_16 and Chiang et al. (2006) @cite_9 and the book by Srikant (2004) @cite_28 for further details. We note the desirable throughput property of the dynamic flow-level resource allocation model (see, for example, Bonald and Massoulie (2001) @cite_13 and de Veciana, Konstantopoulos and Lee (2001) @cite_11 ). This approach, though valid for a general network with arbitrary topology, does not take scheduling constraints into account.
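As a toy numerical illustration of this flow-level model (assuming logarithmic utilities, a made-up two-link / three-flow topology, and a fixed step size; the code is not taken from the cited works), the dual-decomposition iteration below converges to the proportionally fair rate allocation.

import numpy as np

R = np.array([[1, 0],      # flow 0 uses link 0
              [0, 1],      # flow 1 uses link 1
              [1, 1]])     # flow 2 uses both links
c = np.array([1.0, 2.0])                   # link capacities
p = np.ones(2)                             # link prices (dual variables)

for _ in range(5000):
    x = 1.0 / (R @ p)                      # each flow maximizes log(x) - (route price) * x
    p = np.maximum(p + 0.01 * (R.T @ x - c), 1e-6)   # projected price (subgradient) update

print("proportionally fair rates:", np.round(x, 3))  # flow 2 gets less: it consumes two links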
{ "cite_N": [ "@cite_28", "@cite_9", "@cite_3", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "1572996156", "2158893758", "2159715570", "2126956578", "2161809383", "2166933924" ], "abstract": [ "Preface Introduction Resource Allocation Congestion Control: A Decentralized Solution Relationship to Current Internet Protocols Linear Analysis with Delay: The Single Link Case Linear Analysis with Delay: The Network Case Global Stability for a Single Link and Single Flow Stochastic Models and Their Deterministic Limits Connection-level Models Real-Time Sources and Distributed Admission Control Conclusions References Index", "Network protocols in layered architectures have historically been obtained on an ad hoc basis, and many of the recent cross-layer designs are also conducted through piecemeal approaches. Network protocol stacks may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems. This paper presents a survey of the recent efforts towards a systematic understanding of layering as optimization decomposition, where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. There can be many alternative decompositions, leading to a choice of different layering architectures. This paper surveys the current status of horizontal decomposition into distributed computation, and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and channel coding. Key messages and methods arising from many recent works are summarized, and open issues discussed. Through case studies, it is illustrated how layering as Optimization Decomposition provides a common language to think about modularization in the face of complex, networked interactions, a unifying, top-down approach to design protocol stacks, and a mathematical theory of network architectures", "This paper analyses the stability and fairness of two classes of rate control algorithm for communication networks. The algorithms provide natural generalisations to large-scale networks of simple additive increase multiplicative decrease schemes, and are shown to be stable about a system optimum characterised by a proportional fairness criterion. Stability is established by showing that, with an appropriate formulation of the overall optimisation problem, the network's implicit objective function provides a Lyapunov function for the dynamical system defined by the rate control algorithm. The network's optimisation problem may be cast in primal or dual form: this leads naturally to two classes of algorithm, which may be interpreted in terms of either congestion indication feedback signals or explicit rates based on shadow prices. Both classes of algorithm may be generalised to include routing control, and provide natural implementations of proportionally fair pricing.", "We propose a duality model of end-to-end congestion control and apply it to understanding the equilibrium properties of TCP and active queue management schemes. The basic idea is to regard source rates as primal variables and congestion measures as dual variables, and congestion control as a distributed primal-dual algorithm over the Internet to maximize aggregate utility subject to capacity constraints. 
The primal iteration is carried out by TCP algorithms such as Reno or Vegas, and the dual iteration is carried out by queue management algorithms such as DropTail, RED or REM. We present these algorithms and their generalizations, derive their utility functions, and study their interaction.", "We discuss the relevance of fairness as a design objective for congestion control mechanisms in the Internet. Specifically, we consider a backbone network shared by a dynamic number of short-lived flows, and study the impact of bandwidth sharing on network performance. In particular, we prove that for a broad class of fair bandwidth allocations, the total number of flows in progress remains finite if the load of every link is less than one. We also show that provided the bandwidth allocation is \"sufficiently\" fair, performance is optimal in the sense that the throughput of the flows is mainly determined by their access rate. Neither property is guaranteed with unfair bandwidth allocations, when priority is given to one class of flow with respect to another. This suggests current proposals for a differentiated services Internet may lead to suboptimal utilization of network resources.", "We consider the stability and performance of a model for networks supporting services that adapt their transmission to the available bandwidth. Not unlike real networks, in our model, connection arrivals are stochastic, each has a random amount of data to send, and the number of ongoing connections in the system changes over time. Consequently, the bandwidth allocated to, or throughput achieved by, a given connection may change during its lifetime as feedback control mechanisms react to network loads. Ideally, if there were a fixed number of ongoing connections, such feedback mechanisms would reach an equilibrium bandwidth allocation typically characterized in terms of its \"fairness\" to users, e.g., max-min or proportionally fair. We prove the stability of such networks when the offered load on each link does not exceed its capacity. We use simulation to investigate performance, in terms of average connection delays, for various fairness criteria. Finally, we pose an architectural problem in TCP IPs decoupling of the transport and network layer from the point of view of guaranteeing connection-level stability, which we claim may explain congestion phenomena on the Internet." ] }
0808.2869
1614033707
The standard definition of quantum state randomization, which is the quantum analog of the classical one-time pad, consists in applying some transformation to the quantum message conditioned on a classical secret key k. We investigate encryption schemes in which this transformation is conditioned on a quantum encryption key state ρ k instead of a classical string, and extend this symmetric-key scheme to an asymmetric-key model in which copies of the same encryption key ρ k may be held by several different people, but maintaining information-theoretical security. We find bounds on the message size and the number of copies of the encryption key which can be safely created in these two models in terms of the entropy of the decryption key, and show that the optimal bound can be asymptotically reached by a scheme using classical encryption keys. This means that the use of quantum states as encryption keys does not allow more of these to be created and shared, nor encrypt larger messages, than if these keys are purely classical.
Quantum one-time pads were first proposed in @cite_5 @cite_0 for perfect security; approximate security was then considered in, e.g., @cite_9 @cite_1 @cite_7 . All these schemes assume the sender and receiver share some secret classical string which is used only once to perform the encryption. We extend these models in the symmetric-key case by conditioning the encryption operation on a quantum key and considering security with multiple uses of the same key, and then in the asymmetric-key case by considering security with multiple users holding copies of the same encryption key.
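For reference, a minimal numpy sketch of the perfect single-qubit quantum one-time pad with a classical key, as in the schemes above: two secret key bits select one of the four Pauli operations, reapplying the same operation decrypts, and averaging over the four keys leaves an eavesdropper with the maximally mixed state. The message state and key values are arbitrary.

import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def pad(rho, a, b):
    """Apply X^a Z^b to the density matrix rho (a, b are the two key bits)."""
    U = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
    return U @ rho @ U.conj().T

psi = np.array([[1.0], [1.0]]) / np.sqrt(2)       # example message state |+>
rho = psi @ psi.conj().T

avg = sum(pad(rho, a, b) for a in (0, 1) for b in (0, 1)) / 4
print(np.allclose(avg, I / 2))                    # without the key: maximally mixed state

a, b = 1, 0                                       # a shared 2-bit key
print(np.allclose(pad(pad(rho, a, b), a, b), rho))  # reapplying X^a Z^b recovers rho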
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_1", "@cite_0", "@cite_5" ], "mid": [ "1597310767", "1980744208", "2103486884", "1626954859", "2043414726" ], "abstract": [ "Randomization of quantum states is the quantum analogue of the classical one‐time pad. We present an improved, efficient construction of an approximately randomizing map that uses O(d e2) Pauli operators to map any d‐dimensional state to a state that is within trace distance e of the completely mixed state. Our bound is a log d factor smaller than that of Hayden, Leung, Shor, and Winter, and Ambainis and Smith.Then, we show that a random sequence of essentially the same number of unitary operators, chosen from an appropriate set, with high probability form an approximately randomizing map for d‐dimensional states. Finally, we discuss the optimality of these schemes via connections to different notions of pseudorandomness, and give a new lower bound for small e.", "The construction of a perfectly secure private quantum channel in dimension d is known to require 2 log d shared random key bits between the sender and receiver. We show that if only near-perfect security is required, the size of the key can be reduced by a factor of two. More specifically, we show that there exists a set of roughly d log d unitary operators whose average effect on every input pure state is almost perfectly randomizing, as compared to the d2 operators required to randomize perfectly. Aside from the private quantum channel, variations of this construction can be applied to many other tasks in quantum information processing. We show, for instance, that it can be used to construct LOCC data hiding schemes for bits and qubits that are much more efficient than any others known, allowing roughly log d qubits to be hidden in 2 log d qubits. The method can also be used to exhibit the existence of quantum states with locked classical correlations, an arbitrarily large amplification of the correlation being accomplished by sending a negligibly small classical key. Our construction also provides the basic building block for a method of remotely preparing arbitrary d-dimensional pure quantum states using approximately log d bits of communication and log d ebits of entanglement.", "A quantum encryption scheme (also called private quantum channel, or state randomization protocol) is a one-time pad for quantum messages. If two parties share a classical random string, one of them can transmit a quantum state to the other so that an eavesdropper gets little or no information about the state being transmitted. Perfect encryption schemes leak no information at all about the message. Approximate encryption schemes leak a non-zero (though small) amount of information but require a shorter shared random key. Approximate schemes with short keys have been shown to have a number of applications in quantum cryptography and information theory [8].", "We investigate how a classical private key can be used by two players, connected by an insecure one-way quantum channel, to perform private communication of quantum information. In particular we show that in order to transmit n qubits privately, 2n bits of shared private key are necessary and sufficient. This result may be viewed as the quantum analogue of the classical one-time pad encryption scheme. From the point of view of the eavesdropper, this encryption process can be seen as a randomization of the original state. 
We thus also obtain strict bounds on the amount of entropy necessary for randomizing n qubits.", "We show that 2n random classical bits are both necessary and sufficient for encrypting any unknown state of n quantum bits in an informationally secure manner. We also characterize the complete set of optimal protocols in terms of a set of unitary operations that comprise an orthonormal basis in a canonical inner product space. Moreover, a connection is made between quantum encryption and quantum teleportation that allows for a different proof of optimality of teleportation." ] }
0808.2869
1614033707
The standard definition of quantum state randomization, which is the quantum analog of the classical one-time pad, consists in applying some transformation to the quantum message conditioned on a classical secret key k. We investigate encryption schemes in which this transformation is conditioned on a quantum encryption key state ρ k instead of a classical string, and extend this symmetric-key scheme to an asymmetric-key model in which copies of the same encryption key ρ k may be held by several different people, but maintaining information-theoretical security. We find bounds on the message size and the number of copies of the encryption key which can be safely created in these two models in terms of the entropy of the decryption key, and show that the optimal bound can be asymptotically reached by a scheme using classical encryption keys. This means that the use of quantum states as encryption keys does not allow more of these to be created and shared, nor encrypt larger messages, than if these keys are purely classical.
The first scheme using quantum keys in an asymmetric-key model was proposed by @cite_4 , although they considered the restricted scenario of classical messages. Their scheme can encrypt a @math bit classical message, and their security proof is computational, as it reduces the task of breaking the scheme to a graph automorphism problem. They extended their scheme to a multi-bit version @cite_10 , but without security proof. @cite_3 then gave an information-theoretical security proof for @cite_10 . The quantum asymmetric-key model we consider is a generalization and extension of that of @cite_4 @cite_10 .
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_3" ], "mid": [ "1948799478", "1948799478", "1897057659" ], "abstract": [ "We introduce a problem of distinguishing between two quantum states as a new underlying problem to build a computational cryptographic scheme that is ”secure” against quantum adversary. Our problem is a natural generalization of the distinguishability problem between two probability distributions, which are commonly used in computational cryptography. More precisely, our problem QSCDff is the computational distinguishability problem between two types of random coset states with a hidden permutation over the symmetric group. We show that (i) QSCDff has the trapdoor property; (ii) the average-case hardness of QSCDff coincides with its worst-case hardness; and (iii) QSCDff is at least as hard in the worst case as the graph automorphism problem. Moreover, we show that QSCDff cannot be efficiently solved by any quantum algorithm that naturally extends Shor's factorization algorithm. These cryptographic properties of QSCDff enable us to construct a public-key cryptosystem, which is likely to withstand any attack of a polynomial-time quantum adversary.", "We introduce a problem of distinguishing between two quantum states as a new underlying problem to build a computational cryptographic scheme that is ”secure” against quantum adversary. Our problem is a natural generalization of the distinguishability problem between two probability distributions, which are commonly used in computational cryptography. More precisely, our problem QSCDff is the computational distinguishability problem between two types of random coset states with a hidden permutation over the symmetric group. We show that (i) QSCDff has the trapdoor property; (ii) the average-case hardness of QSCDff coincides with its worst-case hardness; and (iii) QSCDff is at least as hard in the worst case as the graph automorphism problem. Moreover, we show that QSCDff cannot be efficiently solved by any quantum algorithm that naturally extends Shor's factorization algorithm. These cryptographic properties of QSCDff enable us to construct a public-key cryptosystem, which is likely to withstand any attack of a polynomial-time quantum adversary.", "One of the central issues in the hidden subgroup problem is to bound the sample complexity, i.e., the number of identical samples of coset states sufficient and necessary to solve the problem. In this paper, we present general bounds for the sample complexity of the identification and decision versions of the hidden subgroup problem. As a consequence of the bounds, we show that the sample complexity for both of the decision and identification versions is Θ(log |H| log p) for a candidate set H of hidden subgroups in the case where the candidate nontrivial subgroups have the same prime order p, which implies that the decision version is at least as hard as the identification version in this case. In particular, it does so for the important cases such as the dihedral and the symmetric hidden subgroup problems. Moreover, the upper bound of the identification is attained by a variant of the pretty good measurement. This implies that the concept of the pretty good measurement is quite useful for identification of hidden subgroups over an arbitrary group with optimal sample complexity." ] }
0808.1207
1622004134
Searching in P2P networks is fundamental to all overlay networks. P2P networks based on Distributed Hash Tables (DHT) are optimized for single key lookups, whereas unstructured networks offer more complex queries at the cost of increased traffic and uncertain success rates. Our Distributed Tree Construction (DTC) approach enables structured P2P networks to perform prefix search, range queries, and multicast in an optimal way. It achieves this by creating a spanning tree over the peers in the search area, using only information available locally on each peer. Because DTC creates a spanning tree, it can query all the peers in the search area with a minimal number of messages. Furthermore, we show that the tree depth has the same upper bound as a regular DHT lookup which in turn guarantees fast and responsive runtime behavior. By placing objects with a region quadtree, we can perform a prefix search or a range query in a freely selectable area of the DHT. Our DTC algorithm is DHT-agnostic and works with most existing DHTs. We evaluate the performance of DTC over several DHTs by comparing the performance to existing application-level multicast solutions, we show that DTC sends 30–250 fewer messages than common solutions.
Flooding approaches for P2P networks in general have been extensively investigated @cite_2 @cite_10 @cite_25 . While simple flooding approaches may apply to unstructured networks, DHT networks can take advantage of the structured neighbor lists and reduce the number of duplicate messages.
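A toy sketch of flooding with per-query duplicate suppression (the topology and interface are illustrative and not tied to any particular system): each peer forwards a query at most once, which is exactly the kind of redundancy that structured neighbor lists in a DHT can reduce further.

from collections import deque

def flood(neighbors, origin, ttl=5):
    """neighbors: dict mapping each peer to its neighbor list; returns peers reached."""
    seen = {origin}                        # in a real overlay this cache is keyed by a query ID
    frontier = deque([(origin, ttl)])
    while frontier:
        peer, hops = frontier.popleft()
        if hops == 0:
            continue
        for nxt in neighbors[peer]:
            if nxt not in seen:            # duplicate suppression: forward to each peer only once
                seen.add(nxt)
                frontier.append((nxt, hops - 1))
    return seen

topology = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "D"], "D": ["B", "C"]}
print(sorted(flood(topology, "A")))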
{ "cite_N": [ "@cite_10", "@cite_25", "@cite_2" ], "mid": [ "2169047226", "2118572087", "" ], "abstract": [ "Napster pioneered the idea of peer-to-peer file sharing, and supported it with a centralized file search facility. Subsequent P2P systems like Gnutella adopted decentralized search algorithms. However, Gnutella's notoriously poor scaling led some to propose distributed hash table solutions to the wide-area file search problem. Contrary to that trend, we advocate retaining Gnutella's simplicity while proposing new mechanisms that greatly improve its scalability. Building upon prior research [1, 12, 22], we propose several modifications to Gnutella's design that dynamically adapt the overlay topology and the search algorithms in order to accommodate the natural heterogeneity present in most peer-to-peer systems. We test our design through simulations and the results show three to five orders of magnitude improvement in total system capacity. We also report on a prototype implementation and its deployment on a testbed.", "Peer-to-peer systems promise inexpensive scalability, adaptability, and robustness. Thus, they are an attractive platform for file sharing, distributed wikis, and search engines. These applications often store weakly structured data, requiring sophisticated search algorithms. To simplify the search problem, most scalable algorithms introduce structure to the network. However, churn or violent disruption may break this structure, compromising search guarantees. This paper proposes a simple probabilistic search system, BubbleStorm, built on random multigraphs. Our primary contribution is a flexible and reliable strategy for performing exhaustive search. BubbleStorm also exploits the heterogeneous bandwidth of peers. However, we sacrifice some of this bandwidth for high parallelism and low latency. The provided search guarantees are tunable, with success probability adjustable well into the realm of reliable systems. For validation, we simulate a network with one million low-end peers and show BubbleStorm handles up to 90 simultaneous peer departure and 50 simultaneous crash.", "" ] }
0808.1207
1622004134
Searching in P2P networks is fundamental to all overlay networks. P2P networks based on Distributed Hash Tables (DHT) are optimized for single key lookups, whereas unstructured networks offer more complex queries at the cost of increased traffic and uncertain success rates. Our Distributed Tree Construction (DTC) approach enables structured P2P networks to perform prefix search, range queries, and multicast in an optimal way. It achieves this by creating a spanning tree over the peers in the search area, using only information available locally on each peer. Because DTC creates a spanning tree, it can query all the peers in the search area with a minimal number of messages. Furthermore, we show that the tree depth has the same upper bound as a regular DHT lookup, which in turn guarantees fast and responsive runtime behavior. By placing objects with a region quadtree, we can perform a prefix search or a range query in a freely selectable area of the DHT. Our DTC algorithm is DHT-agnostic and works with most existing DHTs. We evaluate the performance of DTC over several DHTs by comparing it to existing application-level multicast solutions; we show that DTC sends 30–250 fewer messages than common solutions.
In the case of CAN, work by @cite_13 implemented an application-level multicast on top of CAN. As our evaluation shows, this approach can have a significant overhead: even in the best case the overhead is about 32%, and since the overhead is a function of the network size and of the CAN dimensions, it can grow considerably higher for smaller search areas. An improved version of this ALM scheme is described in @cite_1. It reduces duplicate messages, but duplicates may still occur, especially in the case of uneven zone sizes within the CAN overlay.
{ "cite_N": [ "@cite_1", "@cite_13" ], "mid": [ "2129235628", "2166454918" ], "abstract": [ "Structured peer-to-peer overlay networks such as CAN, Chord, Pastry, and Tapestry can be used to implement Internet-scale application-level multicast. There are two general approaches to accomplishing this: tree building and flooding. This paper evaluates these two approaches using two different types of structured overlay: 1) overlays which use a form of generalized hypercube routing, e.g., Chord, Pastry and Tapestry, and 2) overlays which use a numerical distance metric to route through a Cartesian hyperspace, e.g., CAN. Pastry and CAN are chosen as the representatives of each type of overlay. To the best of our knowledge, this paper reports the first head-to-head comparison of CAN-style versus Pastry-style overlay networks, using multicast communication workloads running on an identical simulation infrastructure. The two approaches to multicast are independent of overlay network choice, and we provide a comparison of flooding versus tree-based multicast on both overlays. Results show that the tree-based approach consistently outperforms the flooding approach. Finally, for tree-based multicast, we show that Pastry provides better performance than CAN.", "Most currently proposed solutions to application-level multicast organise the group members into an application-level mesh over which a Distance-Vector routing protocol, or a similar algorithm, is used to construct source-rooted distribution trees. The use of a global routing protocol limits the scalability of these systems. Other proposed solutions that scale to larger numbers of receivers do so by restricting the multicast service model to be single-sourced. In this paper, we propose an application-level multicast scheme capable of scaling to large group sizes without restricting the service model to a single source. Our scheme builds on recent work on Content-Addressable Networks (CANs). Extending the CAN framework to support multicast comes at trivial additional cost and, because of the structured nature of CAN topologies, obviates the need for a multicast routingalg orithm. Given the deployment of a distributed infrastructure such as a CAN, we believe our CAN-based multicast scheme offers the dual advantages of simplicity and scalability." ] }
0808.1207
1622004134
Searching in P2P networks is fundamental to all overlay networks. P2P networks based on Distributed Hash Tables (DHT) are optimized for single key lookups, whereas unstructured networks offer more complex queries at the cost of increased traffic and uncertain success rates. Our Distributed Tree Construction (DTC) approach enables structured P2P networks to perform prefix search, range queries, and multicast in an optimal way. It achieves this by creating a spanning tree over the peers in the search area, using only information available locally on each peer. Because DTC creates a spanning tree, it can query all the peers in the search area with a minimal number of messages. Furthermore, we show that the tree depth has the same upper bound as a regular DHT lookup, which in turn guarantees fast and responsive runtime behavior. By placing objects with a region quadtree, we can perform a prefix search or a range query in a freely selectable area of the DHT. Our DTC algorithm is DHT-agnostic and works with most existing DHTs. We evaluate the performance of DTC over several DHTs by comparing it to existing application-level multicast solutions; we show that DTC sends 30–250 fewer messages than common solutions.
Multi-attribute range queries have been applied to existing DHTs by introducing hubs @cite_15. The idea of hubs, each of which is responsible for one attribute, is independent from the underlying DHT; a hub can be seen as a separate overlay for each attribute. Our prefix and multicast approach can also be applied to hubs, which would lead to an optimized ID space for each attribute: on the one hand, fewer nodes would need to be queried; on the other hand, a separate overlay network has to be maintained for each attribute.
{ "cite_N": [ "@cite_15" ], "mid": [ "2096538410" ], "abstract": [ "This paper presents the design of Mercury, a scalable protocol for supporting multi-attribute range-based searches. Mercury differs from previous range-based query systems in that it supports multiple attributes as well as performs explicit load balancing. To guarantee efficient routing and load balancing, Mercury uses novel light-weight sampling mechanisms for uniformly sampling random nodes in a highly dynamic overlay network. Our evaluation shows that Mercury is able to achieve its goals of logarithmic-hop routing and near-uniform load balancing.We also show that Mercury can be used to solve a key problem for an important class of distributed applications: distributed state maintenance for distributed games. We show that the Mercury-based solution is easy to use, and that it reduces the game's messaging overheard significantly compared to a naive approach." ] }
0808.1522
2099785642
We construct a homeomorphism between the compact regular locale of integrals on a Riesz space and the locale of measures (valuations) on its spectrum. In fact, we construct two geometric theories and show that they are biinterpretable. The constructions are elementary and tightly connected to the Riesz space structure. 2000 Mathematics Subject Classification 06D22, 28C05
Vickers @cite_19 presents another variant of the Riesz representation theorem. His construction works for locales which are not necessarily compact completely regular. However, his integrals have their values in the lower (or upper) reals, as opposed to the Dedekind reals. A locale of valuations was first presented by Heckman @cite_7 .
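For readers unfamiliar with the distinction, the following is a rough sketch of one common constructive formulation; the exact definitions used in the cited works may differ in detail. A lower real is a one-sided cut, i.e. a set of rational lower bounds

    L \subseteq \mathbb{Q}, \qquad q' < q \in L \Rightarrow q' \in L \ (\text{downward closed}), \qquad q \in L \Rightarrow \exists q'' \in L,\ q < q'' \ (\text{rounded}),

while a Dedekind real is a pair (L, U) of an inhabited lower cut and an inhabited upper cut that are disjoint and located,

    \forall q < r \in \mathbb{Q}: \ q \in L \ \vee \ r \in U.

Lower reals are closed under suprema of inhabited families; since a lower integral arises as a supremum of approximations from below, it naturally yields a lower real rather than a Dedekind real.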
{ "cite_N": [ "@cite_19", "@cite_7" ], "mid": [ "2144063313", "1565369011" ], "abstract": [ "An account of lower and upper integration is given. It is constructive in the sense of geometric logic. If the integrand takes its values in the nonnegative lower reals, then its lower integral with respect to a valuation is a lower real. If the integrand takes its values in the non-negative upper reals, then its upper integral with respect to a covaluation and with domain of integration bounded by a compact subspace is an upper real. Spaces of valuations and of covaluations are deflned. Riemann and Choquet integrals can be calculated in terms of these lower and upper integrals.", "The probabilistic power domain construction of Jones and Plotkin [6, 7] is defined by a construction on dcpo's. We present alternative definitions in terms of information systems a la Vickers [12], and in terms of locales. On continuous domains, all three definitions coincide." ] }
0807.4580
1937990742
MEMS storage devices are new non-volatile secondary storages that have outstanding advantages over magnetic disks. MEMS storage devices, however, are much different from magnetic disks in the structure and access characteristics in the following ways. They have thousands of heads called probe tips and provide the following two major access facilities: (1) flexibility: freely selecting a set of probe tips for accessing data, (2) parallelism: simultaneously reading and writing data with the set of probe tips selected. Due to these characteristics, it is nontrivial to find data placements that fully utilize the capability of MEMS storage devices. In this paper, we propose a simple logical model called the Region-Sector (RS) model that abstracts major characteristics affecting data retrieval performance, such as flexibility and parallelism, from the physical MEMS storage model. We also suggest heuristic data placement strategies based on the RS model. To show the usability of the RS model, we derive new data placements for relational data and two-dimensional spatial data by using these strategies. Experimental results show that the proposed data placements improve the data retrieval performance by up to 4.7 times for relational data and by up to 18.7 times for two-dimensional spatial data of approximately 320 Mbytes compared with those of existing data placements. Further, these improvements are expected to be more marked as the database size grows.
There have been a number of studies on data placement for the MEMS storage device. We classify them into two categories -- disk mapping approaches and device-specific approaches -- depending on whether they take advantage of the characteristics of the storage device. This classification is analogous to that of the flash memory @cite_4, which is another type of new non-volatile secondary storage. For the flash memory, device-specific approaches (e.g., Yet Another Flash File System (YAFFS) @cite_9) provide new mechanisms that exploit the features of the flash memory to improve performance, while disk mapping approaches (e.g., the Flash Translation Layer (FTL) @cite_18) abstract the flash memory as a linear array of fixed-size pages so that existing disk-based algorithms can be used unchanged. In this section, we explain the two corresponding categories for the MEMS storage device in more detail.
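To make the disk-mapping side of this distinction concrete, the following Python sketch shows an FTL-style indirection layer; the class, its fields, and the out-of-place update policy are illustrative assumptions rather than the actual FTL design of @cite_18, but they show how a flat array of logical pages can be presented on top of a medium that cannot overwrite in place.

    class ToyFTL:
        """Illustrative flash-translation-layer sketch: logical pages -> physical pages."""
        def __init__(self, num_physical_pages):
            self.mapping = {}                       # logical page -> physical page
            self.free = list(range(num_physical_pages))
            self.storage = {}                       # physical page -> data
            self.stale = set()                      # physical pages awaiting erasure

        def write(self, logical_page, data):
            # Flash cannot overwrite in place: redirect each write to a fresh
            # physical page and remember the old one for later garbage collection.
            new_phys = self.free.pop(0)
            if logical_page in self.mapping:
                self.stale.add(self.mapping[logical_page])
            self.mapping[logical_page] = new_phys
            self.storage[new_phys] = data

        def read(self, logical_page):
            return self.storage[self.mapping[logical_page]]

    ftl = ToyFTL(num_physical_pages=8)
    ftl.write(0, "v1")
    ftl.write(0, "v2")                              # the update goes out of place
    assert ftl.read(0) == "v2" and len(ftl.stale) == 1

Disk-based algorithms see only the read/write interface on logical pages, which is exactly the abstraction the disk mapping approaches provide.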
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_18" ], "mid": [ "59502695", "2099753358", "" ], "abstract": [ "A channel-shaped structure is shown in which a channel-shaped met al carrier has a covering of flexible material which is arranged to define a gripping rib running along and protruding from one inside wall of the channel. Met al reinforcement at least partly extends into the gripping rib. Also disclosed is a met al carrier comprising a series of U-shaped met al elements which are spaced from each other and are each provided with a corrugation which runs around the U from the end of one leg of the U to the end of the other leg thereof.", "Flash memory is a type of electrically-erasable programmable read-only memory (EEPROM). Because flash memories are nonvolatile and relatively dense, they are now used to store files and other persistent objects in handheld computers, mobile phones, digital cameras, portable music players, and many other computer systems in which magnetic disks are inappropriate. Flash, like earlier EEPROM devices, suffers from two limitations. First, bits can only be cleared by erasing a large block of memory. Second, each block can only sustain a limited number of erasures, after which it can no longer reliably store data. Due to these limitations, sophisticated data structures and algorithms are required to effectively use flash memories. These algorithms and data structures support efficient not-in-place updates of data, reduce the number of erasures, and level the wear of the blocks in the device. This survey presents these algorithms and data structures, many of which have only been described in patents until now.", "" ] }
0807.4580
1937990742
MEMS storage devices are new non-volatile secondary storages that have outstanding advantages over magnetic disks. MEMS storage devices, however, are much different from magnetic disks in the structure and access characteristics in the following ways. They have thousands of heads called probe tips and provide the following two major access facilities: (1) flexibility: freely selecting a set of probe tips for accessing data, (2) parallelism: simultaneously reading and writing data with the set of probe tips selected. Due to these characteristics, it is nontrivial to find data placements that fully utilize the capability of MEMS storage devices. In this paper, we propose a simple logical model called the Region-Sector (RS) model that abstracts major characteristics affecting data retrieval performance, such as flexibility and parallelism, from the physical MEMS storage model. We also suggest heuristic data placement strategies based on the RS model. To show the usability of the RS model, we derive new data placements for relational data and two-dimensional spatial data by using these strategies. Experimental results show that the proposed data placements improve the data retrieval performance by up to 4.7 times for relational data and by up to 18.7 times for two-dimensional spatial data of approximately 320 Mbytes compared with those of existing data placements. Further, these improvements are expected to be more marked as the database size grows.
The approaches in @cite_16 and @cite_11 proposed models that use the MEMS storage device just like a disk. They abstract the MEMS storage device as a linear array of fixed-size logical blocks with one head. This linear abstraction works well for most applications that use the MEMS storage device as a drop-in replacement for the disk @cite_16. However, it provides relatively poor data retrieval performance compared with device-specific approaches @cite_0 @cite_2 because it does not take full advantage of the characteristics of the MEMS storage device @cite_13.
{ "cite_N": [ "@cite_0", "@cite_2", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2086545850", "2072618081", "2007035212", "1880323110", "1585713974" ], "abstract": [ "Due to the large difference between seek time and transfer time in current disk technology, it is advantageous to perform large I O using a single sequential access rather than multiple small random I O accesses. However, prior optimal cost and data placement approaches for processing range queries over two-dimensional datasets do not consider this property. In particular, these techniques do not consider the issue of sequential data placement when multiple I O blocks need to be retrieved from a single device. In this paper, we reevaluate the optimal cost of range queries by declustering two-dimensional datasets over multiple devices, and prove that, in general, it is impossible to achieve the new optimal cost. This is because disks cannot facilitate two-dimensional sequential access which is required by the new optimal cost. Then we revisit the existing data allocation schemes under the new optimal cost, and show that none of them can achieve the new optimal cost. Fortunately, MEMS-based storage is being developed to reduce I O cost. We first show that the two-dimensional sequential access requirement can not be satisfied by simply modeling MEMS-based storage as conventional disks. Then we propose a new placement scheme that exploits the physical properties of MEMS-based storage to solve this problem. Our theoretical analysis and experimental results show that the new scheme achieves almost optimal I O costs.", "Due to recent advances in semiconductor manufacturing, the gap between main memory and disks is constantly increasing. This leads to a significant performance bottleneck for Relational Database Management Systems. Recent advances in nanotechnology have led to the invention of MicroElectroMechanical Systems (MEMS) based storage technology to replace disks. In this paper, we exploit the physical characteristics of MEMS-based storage devices to develop a placement scheme for relational data that enables retrieval in both row-wise and column-wise manner. We develop algorithms for different relational operations based on this data layout. Our experimental results and analysis demonstrate that this data layout not only improves I O utilization, but results in better cache performance for a variety of different relational operations.", "MEMS-based storage devices promise significant performance, reliability, and power improvements relative to disk drives. This paper compares and contrasts these two storage technologies and explores how the physical characteristics of MEMS-based storage devices change four aspects of operating system (OS) management: request scheduling, data placement, failure management, and power conservation. Straightforward adaptations of existing disk request scheduling algorithms are found to be appropriate for MEMS-based storage devices. A new bipartite data placement scheme is shown to better match these devices' novel mechanical positioning characteristics. With aggressive internal redundancy, MEMS-based storage devices can mask and tolerate failure modes that halt operation or cause data loss for disks. In addition, MEMS-based storage devices simplify power management because the devices can be stopped and started rapidly.", "MEMS-based storage devices (MEMStores) are significantly different from both disk drives and semiconductor memories. 
The differences motivate the question of whether they need new abstractions to be utilized by systems, or if existing abstractions will be sufficient. This paper addresses this question by examining the fundamental reasons that the abstraction works for existing devices, and by showing that these reasons also hold for MEMStores. This result is shown to hold through several case studies of proposed roles MEMStores may take in future systems and potential policies that may be used to tailor systems' access to MEMStores. With one noted exception, today's storage interfaces and abstractions are as suitable for MEMStores as for disks.", "Probe-based storage, also known as micro-electric mechanical systems (MEMS) storage, is a new technology that is emerging to bypass the fundamental limitations of disk drives. The design space of such devices is particularly interesting because we can architect these devices to different design points, each with different performance characteristics. This makes it more difficult to understand how to use probe-based storage in a system. Although researchers have modeled access times and simulated performance of workloads, such simulations are time-intensive and make it difficult to exhaustively search the parameter space for optimal configurations. To address this problem, we have created a parameterized analytical model that computes the average request latency of a probe-based storage device. Our error compared to a simulated device using real-world traces is small (less than 15 for service time). With this model we can identify configurations that will satisfy specific performance objectives, greatly narrowing the search space of configurations one must simulate." ] }
0807.4580
1937990742
MEMS storage devices are new non-volatile secondary storages that have outstanding advantages over magnetic disks. MEMS storage devices, however, are much different from magnetic disks in the structure and access characteristics in the following ways. They have thousands of heads called probe tips and provide the following two major access facilities: (1) flexibility: freely selecting a set of probe tips for accessing data, (2) parallelism: simultaneously reading and writing data with the set of probe tips selected. Due to these characteristics, it is nontrivial to find data placements that fully utilize the capability of MEMS storage devices. In this paper, we propose a simple logical model called the Region-Sector (RS) model that abstracts major characteristics affecting data retrieval performance, such as flexibility and parallelism, from the physical MEMS storage model. We also suggest heuristic data placement strategies based on the RS model. To show the usability of the RS model, we derive new data placements for relational data and two-dimensional spatial data by using these strategies. Experimental results show that the proposed data placements improve the data retrieval performance by up to 4.7 times for relational data and by up to 18.7 times for two-dimensional spatial data of approximately 320 Mbytes compared with those of existing data placements. Further, these improvements are expected to be more marked as the database size grows.
The works in @cite_0 @cite_2 proposed methods for placing data on the MEMS storage device based on the data access patterns of applications. @cite_2 places relational data on the MEMS storage device such that projection queries are performed efficiently; @cite_0 places two-dimensional spatial data such that spatial range queries are performed efficiently. These data placements recognize that the data access patterns of such applications are inherently two-dimensional and then place data so as to take advantage of the parallelism and flexibility of the MEMS storage device. We explain each data placement in more detail when comparing them with our methods in Section 6.
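As a toy illustration of the kind of two-dimensional placement these works exploit (not the actual placement algorithms of @cite_0 or @cite_2), the Python sketch below declusters a grid of data tiles over T probe tips with a diagonal assignment, so that any run of up to T consecutive tiles along a row or along a column lands on distinct tips and can therefore be fetched in a single parallel access.

    def tip_for_tile(i, j, num_tips):
        """Diagonal (latin-square style) declustering of tile (i, j) over the probe tips."""
        return (i + j) % num_tips

    def tips_touched(tiles, num_tips):
        """Distinct probe tips needed for a list of (i, j) tiles."""
        return {tip_for_tile(i, j, num_tips) for (i, j) in tiles}

    T = 8
    row_run = [(3, j) for j in range(T)]       # row-wise access, e.g. one record
    col_run = [(i, 5) for i in range(T)]       # column-wise access, e.g. one attribute
    assert len(tips_touched(row_run, T)) == T  # all tiles fall on distinct tips,
    assert len(tips_touched(col_run, T)) == T  # so either scan proceeds fully in parallel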
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2086545850", "2072618081" ], "abstract": [ "Due to the large difference between seek time and transfer time in current disk technology, it is advantageous to perform large I O using a single sequential access rather than multiple small random I O accesses. However, prior optimal cost and data placement approaches for processing range queries over two-dimensional datasets do not consider this property. In particular, these techniques do not consider the issue of sequential data placement when multiple I O blocks need to be retrieved from a single device. In this paper, we reevaluate the optimal cost of range queries by declustering two-dimensional datasets over multiple devices, and prove that, in general, it is impossible to achieve the new optimal cost. This is because disks cannot facilitate two-dimensional sequential access which is required by the new optimal cost. Then we revisit the existing data allocation schemes under the new optimal cost, and show that none of them can achieve the new optimal cost. Fortunately, MEMS-based storage is being developed to reduce I O cost. We first show that the two-dimensional sequential access requirement can not be satisfied by simply modeling MEMS-based storage as conventional disks. Then we propose a new placement scheme that exploits the physical properties of MEMS-based storage to solve this problem. Our theoretical analysis and experimental results show that the new scheme achieves almost optimal I O costs.", "Due to recent advances in semiconductor manufacturing, the gap between main memory and disks is constantly increasing. This leads to a significant performance bottleneck for Relational Database Management Systems. Recent advances in nanotechnology have led to the invention of MicroElectroMechanical Systems (MEMS) based storage technology to replace disks. In this paper, we exploit the physical characteristics of MEMS-based storage devices to develop a placement scheme for relational data that enables retrieval in both row-wise and column-wise manner. We develop algorithms for different relational operations based on this data layout. Our experimental results and analysis demonstrate that this data layout not only improves I O utilization, but results in better cache performance for a variety of different relational operations." ] }
0807.4154
2117593045
We present a protocol which allows a client to have a server carry out a quantum computation for her such that the client's inputs, outputs and computation remain perfectly private, and where she does not require any quantum computational power or memory. The client only needs to be able to prepare single qubits randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Our protocol is interactive: after the initial preparation of quantum states, the client and server use two-way classical communication which enables the client to drive the computation, giving single-qubit measurement instructions to the server, depending on previous measurement outcomes. Our protocol works for inputs and outputs that are either classical or quantum. We give an authentication protocol that allows the client to detect an interfering server; our scheme can also be made fault-tolerant. We also generalize our result to the setting of a purely classical client who communicates classically with two non-communicating entangled servers, in order to perform a blind quantum computation. By incorporating the authentication protocol, we show that any problem in BQP has an entangled two-prover interactive proof with a purely classical verifier. Our protocol is the first universal scheme which detects a cheating server, as well as the first protocol which does not require any quantum computation whatsoever on the client's side. The novelty of our approach is in using the unique features of measurement-based quantum computing which allows us to clearly distinguish between the quantum and classical aspects of a quantum computation.
Ignoring the blindness requirement of our protocol yields an interactive proof with a @math prover and a nearly-classical verifier. As mentioned, this scenario was first proposed in the work of @cite_23, using very different techniques based on authentication schemes. Their protocol can also be used for blind quantum computation. However, their scheme requires that Alice have quantum computational resources and memory to act on a constant-sized register. A related classical protocol for the scenario involving a @math prover and a nearly-linear time verifier was given in @cite_12.
{ "cite_N": [ "@cite_12", "@cite_23" ], "mid": [ "2146099890", "2952413488" ], "abstract": [ "In this work we study interactive proofs for tractable languages. The (honest) prover should be efficient and run in polynomial time, or in other words a \"muggle\". The verifier should be super-efficient and run in nearly-linear time. These proof systems can be used for delegating computation: a server can run a computation for a client and interactively prove the correctness of the result. The client can verify the result's correctness in nearly-linear time (instead of running the entire computation itself). Previously, related questions were considered in the Holographic Proof setting by Babai, Fortnow, Levin and Szegedy, in the argument setting under computational assumptions by Kilian, and in the random oracle model by Micali. Our focus, however, is on the original interactive proof model where no assumptions are made on the computational power or adaptiveness of dishonest provers. Our main technical theorem gives a public coin interactive proof for any language computable by a log-space uniform boolean circuit with depth d and input length n. The verifier runs in time (n+d) • polylog(n) and space O(log(n)), the communication complexity is d • polylog(n), and the prover runs in time poly(n). In particular, for languages computable by log-space uniform NC (circuits of polylog(n) depth), the prover is efficient, the verifier runs in time n • polylog(n) and space O(log(n)), and the communication complexity is polylog(n). Using this theorem we make progress on several questions: We show how to construct short (polylog size) computationally sound non-interactive certificates of correctness for any log-space uniform NC computation, in the public-key model. The certificates can be verified in quasi-linear time and are for a designated verifier: each certificate is tailored to the verifier's public key. This result uses a recent transformation of Kalai and Raz from public-coin interactive proofs to one-round arguments. The soundness of the certificates is based on the existence of a PIR scheme with polylog communication. Interactive proofs with public-coin, log-space, poly-time verifiers for all of P. This settles an open question regarding the expressive power of proof systems with such verifiers. Zero-knowledge interactive proofs with communication complexity that is quasi-linear in the witness, length for any NP language verifiable in NC, based on the existence of one-way functions. Probabilistically checkable arguments (a model due to Kalai and Raz) of size polynomial in the witness length (rather than the instance length) for any NP language verifiable in NC, under computational assumptions.", "The widely held belief that BQP strictly contains BPP raises fundamental questions: Upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that these systems perform as they should, if we cannot efficiently compute predictions for their behavior? Vazirani has asked: If predicting Quantum Mechanical systems requires exponential resources, is QM a falsifiable theory? In cryptographic settings, an untrusted future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions? To answer these questions, we define Quantum Prover Interactive Proofs (QPIP). 
Whereas in standard Interactive Proofs the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computational capabilities: it is a BPP machine, with access to few qubits. Our main theorem can be roughly stated as: \"Any language in BQP has a QPIP, and moreover, a fault tolerant one\". We provide two proofs. The simpler one uses a new (possibly of independent interest) quantum authentication scheme (QAS) based on random Clifford elements. This QPIP however, is not fault tolerant. Our second protocol uses polynomial codes QAS due to BCGHS, combined with quantum fault tolerance and multiparty quantum computation techniques. A slight modification of our constructions makes the protocol \"blind\": the quantum computation and input are unknown to the prover. After we have derived the results, we have learned that Broadbent at al. have independently derived \"universal blind quantum computation\" using completely different methods. Their construction implicitly implies similar implications." ] }
0807.4154
2117593045
We present a protocol which allows a client to have a server carry out a quantum computation for her such that the client's inputs, outputs and computation remain perfectly private, and where she does not require any quantum computational power or memory. The client only needs to be able to prepare single qubits randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Our protocol is interactive: after the initial preparation of quantum states, the client and server use two-way classical communication which enables the client to drive the computation, giving single-qubit measurement instructions to the server, depending on previous measurement outcomes. Our protocol works for inputs and outputs that are either classical or quantum. We give an authentication protocol that allows the client to detect an interfering server; our scheme can also be made fault-tolerant. We also generalize our result to the setting of a purely classical client who communicates classically with two non-communicating entangled servers, in order to perform a blind quantum computation. By incorporating the authentication protocol, we show that any problem in BQP has an entangled two-prover interactive proof with a purely classical verifier. Our protocol is the first universal scheme which detects a cheating server, as well as the first protocol which does not require any quantum computation whatsoever on the client's side. The novelty of our approach is in using the unique features of measurement-based quantum computing which allows us to clearly distinguish between the quantum and classical aspects of a quantum computation.
Returning to the cryptographic scenario, still in the model where the function is classical and public, Arrighi and Salvail @cite_4 gave an approach using quantum resources. The idea of their protocol is that Alice gives Bob multiple quantum inputs, most of which are decoys. Bob applies the target function on all inputs, and then Alice verifies his behaviour on the decoys. There are two important points to make here. First, the protocol only works for a restricted set of classical functions: it must be possible for Alice to efficiently generate random input-output pairs. Second, the protocol does not prevent Bob from learning Alice's private input; it provides only cheat-sensitive security.
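The decoy idea can be mocked up classically. In the Python sketch below, factoring stands in for a function whose random input-output pairs Alice can generate cheaply; the sketch only illustrates the structure of the check and captures neither the quantum encoding nor the security analysis of @cite_4.

    import random

    def factor(n):
        """Bob's task: factor a semiprime by trial division (toy scale only)."""
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return (d, n // d)
        return (n, 1)

    primes = [101, 103, 107, 109, 113, 127, 131, 137]

    def random_instance():
        """Alice generates an input together with its known answer."""
        p, q = sorted(random.sample(primes, 2))
        return p * q, (p, q)

    real_input = 127 * 131                       # the instance Alice actually cares about
    decoys = [random_instance() for _ in range(5)]
    batch = [n for n, _ in decoys] + [real_input]
    random.shuffle(batch)

    answers = {n: factor(n) for n in batch}      # Bob works on every instance
    honest = all(answers[n] == ans for n, ans in decoys)   # Alice checks only the decoys
    print("decoy test passed:", honest, "| answer to the real instance:", answers[real_input])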
{ "cite_N": [ "@cite_4" ], "mid": [ "1966842050" ], "abstract": [ "We investigate the possibility of having someone carry out the work of executing a function for you, but without letting him learn anything about your input. Say Alice wants Bob to compute some known function f upon her input x, but wants to prevent Bob from learning anything about x. The situation arises for instance if client Alice has limited computational resources in comparison with mistrusted server Bob, or if x is an inherently mobile piece of data. Could there be a protocol whereby Bob is forced to compute ,f(x)blindly, i.e. without observing x? We provide such a blind computation protocol for the class of functions which admit an efficient procedure to generate random input–output pairs, e.g. factorization. The cheat-sensitive security achieved relies only upon quantum theory being true. The security analysis carried out assumes the eavesdropper performs individual attacks." ] }
0807.4154
2117593045
We present a protocol which allows a client to have a server carry out a quantum computation for her such that the client's inputs, outputs and computation remain perfectly private, and where she does not require any quantum computational power or memory. The client only needs to be able to prepare single qubits randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Our protocol is interactive: after the initial preparation of quantum states, the client and server use two-way classical communication which enables the client to drive the computation, giving single-qubit measurement instructions to the server, depending on previous measurement outcomes. Our protocol works for inputs and outputs that are either classical or quantum. We give an authentication protocol that allows the client to detect an interfering server; our scheme can also be made fault-tolerant. We also generalize our result to the setting of a purely classical client who communicates classically with two non-communicating entangled servers, in order to perform a blind quantum computation. By incorporating the authentication protocol, we show that any problem in BQP has an entangled two-prover interactive proof with a purely classical verifier. Our protocol is the first universal scheme which detects a cheating server, as well as the first protocol which does not require any quantum computation whatsoever on the client's side. The novelty of our approach is in using the unique features of measurement-based quantum computing which allows us to clearly distinguish between the quantum and classical aspects of a quantum computation.
The case of blind quantum computation was first considered by Childs @cite_14, based on the idea of encrypting input qubits with a quantum one-time pad @cite_26 @cite_25. At each step, Alice sends the encrypted qubits to Bob, who applies a known quantum gate (some gates requiring further interaction with Alice). Bob returns the quantum state, which Alice decrypts using her key. Cycling through a fixed set of universal gates ensures that Bob learns nothing about the circuit. The protocol requires fault-tolerant quantum memory and the ability to apply local Pauli operators at each step, and does not provide any method for the detection of malicious errors.
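A minimal numpy sketch of the quantum one-time pad that this protocol builds on (only the single-qubit encryption step, not the interactive gate-by-gate protocol): the key is two uniformly random classical bits (a, b), encryption applies X^a Z^b, and to anyone without the key the average ciphertext over all keys is the maximally mixed state.

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def pauli(a, b):
        """One-time-pad operator X^a Z^b for key bits (a, b)."""
        return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)

    psi = np.array([0.6, 0.8j])                   # an arbitrary normalized qubit state
    a, b = np.random.randint(0, 2, size=2)        # Alice's secret key
    cipher = pauli(a, b) @ psi                    # state sent to Bob
    recovered = pauli(a, b).conj().T @ cipher     # Alice decrypts with her key
    assert np.allclose(recovered, psi)

    # Averaged over the four possible keys, the ciphertext carries no information:
    rho = sum(pauli(a, b) @ np.outer(psi, psi.conj()) @ pauli(a, b).conj().T
              for a in (0, 1) for b in (0, 1)) / 4
    assert np.allclose(rho, I2 / 2)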
{ "cite_N": [ "@cite_14", "@cite_25", "@cite_26" ], "mid": [ "1641951603", "2043414726", "2098611377" ], "abstract": [ "Suppose Alice wants to perform some computation that could be done quickly on a quantum computer, but she cannot do universal quantum computation. Bob can do universal quantum computation and claims he is willing to help, but Alice wants to be sure that Bob cannot learn her input, the result of her calculation, or perhaps even the function she is trying to compute. We describe a simple, efficient protocol by which Bob can help Alice perform the computation, but there is no way for him to learn anything about it. We also discuss techniques for Alice to detect whether Bob is honestly helping her or if he is introducing errors.", "We show that 2n random classical bits are both necessary and sufficient for encrypting any unknown state of n quantum bits in an informationally secure manner. We also characterize the complete set of optimal protocols in terms of a set of unitary operations that comprise an orthonormal basis in a canonical inner product space. Moreover, a connection is made between quantum encryption and quantum teleportation that allows for a different proof of optimality of teleportation.", "We investigate how a classical private key can be used by two players, connected by an insecure one-way quantum channel, to perform private communication of quantum information. In particular, we show that in order to transmit n qubits privately, 2n bits of shared private key are necessary and sufficient. This result may be viewed as the quantum analogue of the classical one-time pad encryption scheme." ] }
0807.1734
1683846638
The Dynamic Time Warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB_Keogh). We compare LB_Keogh with a tighter lower bound (LB_Improved). We find that LB_Improved-based search is faster for sequential search. As an example, our approach is 3 times faster over random-walk and shape time series. We also review some of the mathematical properties of the DTW. We derive a tight triangle inequality for the DTW. We show that the DTW becomes the l_1 distance when time series are separated by a constant.
Besides DTW, several similarity measures have been proposed, including the directed and general Hausdorff distance, Pearson's correlation, nonlinear elastic matching distance @cite_17, Edit distance with Real Penalty (ERP) @cite_50, Needleman-Wunsch similarity @cite_28, Smith-Waterman similarity @cite_19, and SimilB @cite_25.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_50", "@cite_25", "@cite_17" ], "mid": [ "2074231493", "2087064593", "113257341", "1993705020", "2113530345" ], "abstract": [ "A computer adaptable method for finding similarities in the amino acid sequences of two proteins has been developed. From these findings it is possible to determine whether significant homology exists between the proteins. This information is used to trace their possible evolutionary development. The maximum match is a number dependent upon the similarity of the sequences. One of its definitions is the largest number of amino acids of one protein that can be matched with those of a second protein allowing for all possible interruptions in either of the sequences. While the interruptions give rise to a very large number of comparisons, the method efficiently excludes from consideration those comparisons that cannot contribute to the maximum match. Comparisons are made from the smallest unit of significance, a pair of amino acids, one from each protein. All possible pairs are represented by a two-dimensional array, and all possible comparisons are represented by pathways through the array. For this maximum match only certain of the possible pathways must, be evaluated. A numerical value, one in this case, is assigned to every cell in the array representing like amino acids. The maximum match is the largest number that would result from summing the cell values of every", "", "A rolling parallel printer in which a pressure element is driven through a swiveling motion each printing cycle and a pressure segment thereof rolls off a line of type. The pressure element is connected to a mechanical linkage which minimizes the sweep of travel of the pressure element, while maintaining the pressure element sufficiently far from the type in a rest position to facilitate reading of the printed matter.", "A new similarity measure, called SimilB, for time series analysis, based on the cross-ΨB-energy operator (2004), is introduced. ΨB is a nonlinear measure which quantifies the interaction between two time series. Compared to Euclidean distance (ED) or the Pearson correlation coefficient (CC), SimilB includes the temporal information and relative changes of the time series using the first and second derivatives of the time series. SimilB is well suited for both nonstationary and stationary time series and particularly those presenting discontinuities. Some new properties of ΨB are presented. Particularly, we show that ΨB as similarity measure is robust to both scale and time shift. SimilB is illustrated with synthetic time series and an artificial dataset and compared to the CC and the ED measures.", "Shape matching is an important ingredient in shape retrieval, recognition and classification, alignment and registration, and approximation and simplification. This paper treats various aspects that are needed to solve shape matching problems: choosing the precise problem, selecting the properties of the similarity measure that are needed for the problem, choosing the specific similarity measure, and constructing the algorithm to compute the similarity. The focus is on methods that lie close to the field of computational geometry." ] }
0807.1734
1683846638
The Dynamic Time Warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB_Keogh). We compare LB_Keogh with a tighter lower bound (LB_Improved). We find that LB_Improved-based search is faster for sequential search. As an example, our approach is 3 times faster over random-walk and shape time series. We also review some of the mathematical properties of the DTW. We derive a tight triangle inequality for the DTW. We show that the DTW becomes the l_1 distance when time series are separated by a constant.
Dimensionality reduction, such as piecewise constant @cite_31 or piecewise linear @cite_8 @cite_1 @cite_4 segmentation, can speed up retrieval under DTW distance. These techniques can be coupled with other optimization techniques @cite_32 .
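For concreteness, here is a short Python sketch of the simplest such reduction, a piecewise constant (piecewise aggregate) approximation; the segment count k is a free parameter, and the sketch is generic rather than the exact representation used in any of the cited methods.

    import numpy as np

    def paa(series, k):
        """Piecewise constant approximation: reduce a 1-D series to k segment means."""
        chunks = np.array_split(np.asarray(series, dtype=float), k)
        return np.array([chunk.mean() for chunk in chunks])

    x = np.cumsum(np.random.randn(1000))    # a random-walk time series
    sketch = paa(x, 20)                     # 1000 points -> 20 coefficients
    print(sketch.shape)                     # (20,)

Lower bounds computed on such reduced representations are cheaper to evaluate and can filter out candidates before any full DTW computation.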
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_1", "@cite_32", "@cite_31" ], "mid": [ "2182136398", "2157413692", "2086784973", "1968010112", "2091921805" ], "abstract": [ "Similarity measure between time series is a key issue in data mining of time series database. Euclidean distance measure is typically used init. However, the measure is an extremely brittle distance measure. Dynamic Time Warping (DTW) is proposed to deal with this case, but its expensive computation limits its application in massive datasets. In this paper, we present a new distance measure algorithm, called local segmented dynamic time warping (LSDTW), which is based on viewing the local DTW measure at the segment level. The DTW measure between the two segments is the product of the square of the distance between their mean times the number of points of the longer segment. Experiments about cluster analysis on the basis of this algorithm were implemented on a synthetic and a real world dataset comparing with Euclidean and classical DTW measure. The experiment results show that the new algorithm gives better computational performance in comparison to classical DTW with no loss of accuracy.", "Comparison of time series is a key issue in data mining of time series database. Variation or extension of Euclidean distance is generally used. However Euclidean distance will vary much when time series is to be stretched or compressed along the time-axis. Dynamic time warping distance has been proposed to deal with this case, but its expensive computation limits its application. In this paper, a novel distance based on a new linear segmentation method of time series is proposed to avoid such drawbacks. Experiment results in this paper show that the proposed method achieves significant speed up to about 20 times than dynamic time warping distance without accuracy decrease.", "Similarity search is a core module of many data analysis tasks, including search by example, classification, and clustering. For time series data, Dynamic Time Warping (DTW) has been proven a very effective similarity measure, since it minimizes the effects of shifting and distortion in time. However, the quadratic cost of DTW computation to the length of the matched sequences makes its direct application on databases of long time series very expensive. We propose a technique that decomposes the sequences into a number of segments and uses cheap approximations thereof to compute fast lower bounds for their warping distances. We present several, progressively tighter bounds, relying on the existence or not of warping constraints. Finally, we develop an index and a multi-step technique that uses the proposed bounds and performs two levels of filtering to efficiently process similarity queries. A thorough experimental study suggests that our method consistently outperforms state-of-the-art methods for DTW similarity search.", "Time-series data naturally arise in countless domains, such as meteorology, astrophysics, geology, multimedia, and economics. Similarity search is very popular, and DTW (Dynamic Time Warping) is one of the two prevailing distance measures. Although DTW incurs a heavy computation cost, it provides scaling along the time axis. In this paper, we propose FTW (Fast search method for dynamic Time Warping), which guarantees no false dismissals in similarity query processing. FTW efficiently prunes a significant number of the search cost. 
Experiments on real and synthetic sequence data sets reveals that FTW is significantly faster than the best existing method, up to 222 times.", "The problem of indexing time series has attracted much interest. Most algorithms used to index time series utilize the Euclidean distance or some variation thereof. However, it has been forcefully shown that the Euclidean distance is a very brittle distance measure. Dynamic time warping (DTW) is a much more robust distance measure for time series, allowing similar shapes to match even if they are out of phase in the time axis. Because of this flexibility, DTW is widely used in science, medicine, industry and finance. Unfortunately, however, DTW does not obey the triangular inequality and thus has resisted attempts at exact indexing. Instead, many researchers have introduced approximate indexing techniques or abandoned the idea of indexing and concentrated on speeding up sequential searches. In this work, we introduce a novel technique for the exact indexing of DTW. We prove that our method guarantees no false dismissals and we demonstrate its vast superiority over all competing approaches in the largest and most comprehensive set of time series indexing experiments ever undertaken." ] }
0807.1734
1683846638
The Dynamic Time Warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB_Keogh). We compare LB_Keogh with a tighter lower bound (LB_Improved). We find that LB_Improved-based search is faster for sequential search. As an example, our approach is 3 times faster over random-walk and shape time series. We also review some of the mathematical properties of the DTW. We derive a tight triangle inequality for the DTW. We show that the DTW becomes the l_1 distance when time series are separated by a constant.
The performance of lower bounds can be further improved if one uses early abandoning @cite_29 to cancel the computation of the lower bound as soon as the error is too large. Boundary-based lower-bound functions sometimes outperform LB_Keogh @cite_9. Zhu and Shasha showed that computing a warping envelope prior to applying dimensionality reduction results in a tighter lower bound @cite_10. We can also quantize @cite_40 or cluster @cite_11 the time series.
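The following Python sketch combines the two ideas for sequential search: the warping envelope of the query under a Sakoe-Chiba band of radius r, and early abandoning of the (squared) envelope-based lower bound as soon as the partial sum exceeds the best distance found so far. It is a plain re-implementation for illustration, not the code evaluated in the cited papers.

    import numpy as np

    def envelope(query, r):
        """Upper and lower envelopes of the query under a warping window of radius r."""
        n = len(query)
        upper = np.array([max(query[max(0, i - r): i + r + 1]) for i in range(n)])
        lower = np.array([min(query[max(0, i - r): i + r + 1]) for i in range(n)])
        return upper, lower

    def lb_keogh_early_abandon(candidate, upper, lower, best_so_far):
        """Squared envelope lower bound; stop once it cannot beat best_so_far."""
        total = 0.0
        for c, u, l in zip(candidate, upper, lower):
            if c > u:
                total += (c - u) ** 2
            elif c < l:
                total += (l - c) ** 2
            if total >= best_so_far:
                return np.inf              # candidate pruned without computing the DTW
        return total

    q = np.cumsum(np.random.randn(128))
    c = np.cumsum(np.random.randn(128))
    upper, lower = envelope(q, r=10)
    print(lb_keogh_early_abandon(c, upper, lower, best_so_far=50.0))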
{ "cite_N": [ "@cite_9", "@cite_29", "@cite_40", "@cite_10", "@cite_11" ], "mid": [ "2144117844", "2159138228", "1542908876", "2066834853", "2112791883" ], "abstract": [ "Lower-bound functions are crucial for indexing time-series data under dynamic time warping (DTW) distance. In this paper, we propose a unified framework to explain the existing lower-bound functions. Based on the framework, we further propose a group of lower-bound functions for DTW and investigate their performances through extensive experiments. Experimental results show that the new lower-bound functions are better than the existing one in most cases. An index structure based on the new lower-bound functions is also implemented.", "In many applications, it is desirable to monitor a streaming time series for predefined patterns. In domains as diverse as the monitoring of space telemetry, patient intensive care data, and insect populations, where data streams at a high rate and the number of predefined patterns is large, it may be impossible for the comparison algorithm to keep up. We propose a novel technique that exploits the commonality among the predefined patterns to allow monitoring at higher bandwidths, while maintaining a guarantee of no false dismissals. Our approach is based on the widely used envelope-based lower bounding technique. Extensive experiments demonstrate that our approach achieves tremendous improvements in performance in the offline case, and significant improvements in the fastest possible arrival rate of the data stream that can be processed with guaranteed no false dismissal.", "Indexing Time Series Data is an interesting problem that has attracted much interest in the research community for the last decade. Traditional indexing methods organize the data space using different metrics. For time series, however, there are some cases when a metric is not suited for properly assessing the similarity between sequences. For instance, to detect similarities between sequences that are locally out of phase Dynamic Time Warping (DTW) must be used. DTW is not a metric as it does not satisfy the triangular inequality. Therefore, traditional spatial access methods cannot be used without introducing false dismissals. In such cases, alternative methods for organizing and searching time series data must be proposed. In this paper we propose the use of quantization to generate small and homogeneous representations of time series. We compute upper- and lower-bounds on the DTW distance to a query sequence using this quantized representation to filter-out sequences that cannot be a best match for the query. In the proposed approach, efficient search is achieved by organizing the quantized representation of data in a linear array that can be efficiently read from disk. The computational cost of processing the query is shadowed by the IO cost required to scan the file containing the linear array and it does affect the total query cost.", "A Query by Humming system allows the user to find a song by humming part of the tune. No musical training is needed. Previous query by humming systems have not provided satisfactory results for various reasons. Some systems have low retrieval precision because they rely on melodic contour information from the hum tune, which in turn relies on the error-prone note segmentation process. Some systems yield better precision when matching the melody directly from audio, but they are slow because of their extensive use of Dynamic Time Warping (DTW). 
Our approach improves both the retrieval precision and speed compared to previous approaches. We treat music as a time series and exploit and improve well-developed techniques from time series databases to index the music for fast similarity queries. We improve on existing DTW indexes technique by introducing the concept of envelope transforms, which gives a general guideline for extending existing dimensionality reduction methods to DTW indexes. The net result is high scalability. We confirm our claims through extensive experiments.", "The matching of two-dimensional shapes is an important problem with applications in domains as diverse as biometrics, industry, medicine and anthropology. The distance measure used must be invariant to many distortions, including scale, offset, noise, partial occlusion, etc. Most of these distortions are relatively easy to handle, either in the representation of the data or in the similarity measure used. However rotation invariance seems to be uniquely difficult. Current approaches typically try to achieve rotation invariance in the representation of the data, at the expense of discrimination ability, or in the distance measure, at the expense of efficiency. In this work we show that we can take the slow but accurate approaches and dramatically speed them up. On real world problems our technique can take current approaches and make them four orders of magnitude faster, without false dismissals. Moreover, our technique can be used with any of the dozens of existing shape representations and with all the most popular distance measures including Euclidean distance, Dynamic Time Warping and Longest Common Subsequence." ] }
0807.0257
2953153251
This paper deals with efficient numerical representation and manipulation of differential and integral operators as symbols in phase-space, i.e., functions of space @math and frequency @math . The symbol smoothness conditions obeyed by many operators in connection to smooth linear partial differential equations allow to write fast-converging, non-asymptotic expansions in adequate systems of rational Chebyshev functions or hierarchical splines. The classical results of closedness of such symbol classes under multiplication, inversion and taking the square root translate into practical iterative algorithms for realizing these operations directly in the proposed expansions. Because symbol-based numerical methods handle operators and not functions, their complexity depends on the desired resolution @math very weakly, typically only through @math factors. We present three applications to computational problems related to wave propagation: 1) preconditioning the Helmholtz equation, 2) decomposing wavefields into one-way components and 3) depth-stepping in reflection seismology.
The idea of writing pseudodifferential symbols in separated form to formulate various one-way approximations to the variable-coefficient Helmholtz equation has long been a tradition in seismic imaging. This almost invariably involves a high-frequency approximation of some kind. Some influential work includes the phase screen method by Fisk and McCartor @cite_34, and the generalized screen expansion of Le Rousseau and de Hoop @cite_8. This last reference discusses fast application of pseudodifferential operators in separated form using the FFT, and it is likely not the only reference to make this simple observation. A modern treatment of leading-order pseudodifferential approximations to one-way wave equations is in @cite_19.
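A small numpy sketch of the observation mentioned last: when the symbol is available in separated form a(x, xi) = sum_j f_j(x) g_j(xi), the operator can be applied with one FFT pair per term. The periodic grid, the one-term symbol, and the test function below are illustrative choices and not the generalized-screen construction of @cite_8.

    import numpy as np

    def apply_separated_symbol(u, fs, gs):
        """Apply sum_j f_j(x) g_j(xi) to u on a periodic grid, one FFT pair per term."""
        u_hat = np.fft.fft(u)
        out = np.zeros(len(u), dtype=complex)
        for f, g in zip(fs, gs):
            out += f * np.fft.ifft(g * u_hat)
        return out

    n = 256
    x = 2 * np.pi * np.arange(n) / n
    xi = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)   # integer angular frequencies

    # One-term symbol a(x, xi) = (1 + 0.5 cos x) * (i xi): a variable-coefficient d/dx.
    fs = [1 + 0.5 * np.cos(x)]
    gs = [1j * xi]
    u = np.sin(3 * x)
    result = apply_separated_symbol(u, fs, gs)
    assert np.allclose(result.real, (1 + 0.5 * np.cos(x)) * 3 * np.cos(3 * x))

The cost per separated term is a constant number of FFTs, independent of how the separated expansion was obtained.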
{ "cite_N": [ "@cite_19", "@cite_34", "@cite_8" ], "mid": [ "", "2128182271", "2114101044" ], "abstract": [ "", "A phase screen method for vector elastic waves is developed. The method allows rapid calculation of the propagation of elastic waves for problems where backscatter is small; only forward propagation is included in the analysis. The method can be used for propagation through either deterministic or stochastic media. The work described here generalizes existing phase screen methods for scalar waves, accounting for the difference in phase velocities of the transverse and longitudinal polarizations. The method is tested on a two-dimensional problem, whose exact solution is also computed, for constant Poisson ratios. Synthetic seismograms for two-dimensional random media using this method are compared with existing finite difference synthetics. An important use of the method is to estimate the rate of interconversion of S and P waves in terms of the energy flux to analyze the effectiveness of potential discrimination techniques.", "We describe the propagation and scattering of elastic waves in heterogeneous media. Decomposing the elastic wavefield into up- and down-going constituents allows the introduction of the 'one-way' wave equations and propagators. Such propagators account for transverse scattering and mode coupling. The generalized-screen expansion of the symbol of the one-way wave equation in medium contrast and medium smoothness induces an approximation of the propagator with an associated computational complexity of the one of the phase screen approximation. The generalized-screen expansion extends the phase-screen approach. It allows for larger medium fluctuations and wider-angle propagation. We illustrate the accuracy of the generalized screen with numerical examples." ] }
0807.0257
2953153251
This paper deals with efficient numerical representation and manipulation of differential and integral operators as symbols in phase-space, i.e., functions of space @math and frequency @math . The symbol smoothness conditions obeyed by many operators in connection with smooth linear partial differential equations allow one to write fast-converging, non-asymptotic expansions in adequate systems of rational Chebyshev functions or hierarchical splines. The classical results of closedness of such symbol classes under multiplication, inversion and taking the square root translate into practical iterative algorithms for realizing these operations directly in the proposed expansions. Because symbol-based numerical methods handle operators and not functions, their complexity depends on the desired resolution @math very weakly, typically only through @math factors. We present three applications to computational problems related to wave propagation: 1) preconditioning the Helmholtz equation, 2) decomposing wavefields into one-way components and 3) depth-stepping in reflection seismology.
A different, competing approach to compressing operators is the ``partitioned separated'' method that consists in isolating off-diagonal squares of the kernel @math , and approximating each of them by a low-rank matrix. This also calls for an adapted notion of calculus, e.g., for composing and inverting operators. The first reference to this algorithmic framework is probably the partitioned SVD method described in @cite_20 . More recently, these ideas have been extensively developed under the name H-matrix, for hierarchical matrix; see @cite_29 @cite_4 and http://www.hlib.org.
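As a rough illustration of the partitioned low-rank idea (a hypothetical NumPy sketch, not the interface of the H-matrix libraries cited above), the off-diagonal block of a kernel matrix coupling two well-separated point clusters is compressed by a truncated SVD and subsequently applied through its low-rank factors.

import numpy as np

def truncated_svd(block, tol):
    # Low-rank factors (U, V) with block ~ U @ V, keeping only the
    # singular values above tol * sigma_max.
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :r] * s[:r], Vt[:r, :]

# kernel K(x, y) = log|x - y| sampled on two well-separated clusters;
# the off-diagonal interaction block is numerically low-rank
x = np.linspace(0.0, 1.0, 300)
y = np.linspace(3.0, 4.0, 300)
block = np.log(np.abs(x[:, None] - y[None, :]))

U, V = truncated_svd(block, tol=1e-10)
vec = np.random.default_rng(0).standard_normal(y.size)
err = np.linalg.norm(U @ (V @ vec) - block @ vec) / np.linalg.norm(block @ vec)
print(U.shape[1], err)   # small rank, relative error near the tolerance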
{ "cite_N": [ "@cite_29", "@cite_4", "@cite_20" ], "mid": [ "", "2018419001", "2034318967" ], "abstract": [ "", "A class of matrices ( ( H )-matrices) is introduced which have the following properties. (i) They are sparse in the sense that only few data are needed for their representation. (ii) The matrix-vector multiplication is of almost linear complexity. (iii) In general, sums and products of these matrices are no longer in the same set, but their truncations to the ( H )-matrix format are again of almost linear complexity. (iv) The same statement holds for the inverse of an ( H )-matrix.", "An algorithm is presented for the rapid direct solution of the Laplace equation on regions with fractal boundaries. In a typical application, the numerical simulation has to be on a very large scale involving at least tens of thousands of equations with as many unknowns, in order to obtain any meaningful results. Attempts to use conventional techniques have encountered insurmountable difficulties, due to excessive CPU time requirements of the computations involved. Indeed, conventional direct algorithms for the solution of linear systems require order O(N3) operations for the solution of an N � N-problem, while classical iterative methods require order O(N2) operations, with the constant strongly dependent on the problem in question. In either case, the computational expense is prohibitive for large-scale problems. The direct algorithm of the present paper requires O(N) operations with a constant dependent only on the geometry of the boundary, making it considerably more practical for large-scale problems encountered in the computation of harmonic measure of fractals, complex iteration theory, potential theory, and growth phenomena such as crystallization, electrodeposition, viscous fingering, and diffusion-limited aggregation." ] }
0807.0432
2952294017
This work deals with the numerical solution of the monodomain and bidomain models of electrical activity of myocardial tissue. The bidomain model is a system consisting of a possibly degenerate parabolic PDE coupled with an elliptic PDE for the transmembrane and extracellular potentials, respectively. This system of two scalar PDEs is supplemented by a time-dependent ODE modeling the evolution of the so-called gating variable. In the simpler sub-case of the monodomain model, the elliptic PDE reduces to an algebraic equation. Two simple models for the membrane and ionic currents are considered, the Mitchell-Schaeffer model and the simpler FitzHugh-Nagumo model. Since typical solutions of the bidomain and monodomain models exhibit wavefronts with steep gradients, we propose a finite volume scheme enriched by a fully adaptive multiresolution method, whose basic purpose is to concentrate computational effort on zones of strong variation of the solution. Time adaptivity is achieved by two alternative devices, namely locally varying time stepping and a Runge-Kutta-Fehlberg-type adaptive time integration. A series of numerical examples demonstrates that these methods are efficient and sufficiently accurate to simulate the electrical activity in myocardial tissue with affordable effort. In addition, an optimal threshold for discarding non-significant information in the multiresolution representation of the solution is derived, and the numerical efficiency and accuracy of the method is measured in terms of CPU time speed-up, memory compression, and errors in different norms.
MR schemes for hyperbolic partial differential equations were first proposed by Harten @cite_0 . We refer to the work of Müller @cite_26 for a survey on MR methods, see also @cite_40 . As stated above, the idea behind the MR method is to accelerate a reference discretization scheme while controlling the error. In the context of fully adaptive MR methods @cite_21 , the mathematical analysis is complete only in the case of a scalar conservation law, but in practice, these techniques have been used by several groups (see e.g. @cite_30 @cite_10 @cite_26 @cite_3 @cite_6 ) to successfully solve a wide class of problems, including applications to multidimensional systems. For more details on the framework of classical MR methods for hyperbolic partial differential equations, we also refer to @cite_21 and @cite_35 .
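The following one-level Python sketch conveys the basic multiresolution mechanism (the copy prediction and the hard threshold are deliberate simplifications of ours; actual MR schemes use higher-order prediction, several dyadic levels and a graded tree): cell averages are coarsened, predicted back, and the prediction errors (details) are thresholded to locate the zones of strong variation where effort should be concentrated.

import numpy as np

def mr_details(u_fine):
    # One level of a Harten-type multiresolution transform for 1-D cell
    # averages on a dyadic grid: exact coarsening plus prediction errors.
    u_coarse = 0.5 * (u_fine[0::2] + u_fine[1::2])   # exact coarsening
    prediction = np.repeat(u_coarse, 2)              # cheap copy prediction
    return u_coarse, u_fine - prediction             # coarse averages, details

def significant_cells(details, eps):
    # Fine cells whose detail exceeds the threshold; only these would be
    # kept/refined by the adaptive scheme.
    return np.nonzero(np.abs(details) > eps)[0]

# a steep front: details are non-negligible only near the front
x = np.linspace(0.0, 1.0, 512)
u = np.tanh((x - 0.5) / 0.01)
u_coarse, d = mr_details(u)
print(significant_cells(d, 1e-3))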
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_26", "@cite_21", "@cite_3", "@cite_6", "@cite_0", "@cite_40", "@cite_10" ], "mid": [ "", "1980836430", "579827450", "", "2040476229", "2077468478", "2163637155", "", "2029859487" ], "abstract": [ "", "In recent years a variety of high–order schemes for the numerical solution of conservation laws has been developed. In general, these numerical methods involve expensive flux evaluations in order to resolve discontinuities accurately. But in large parts of the flow domain the solution is smooth. Hence in these regions an unexpensive finite difference scheme suffices. In order to reduce the number of expensive flux evaluations we employ a multiresolution strategy which is similar in spirit to an approach that has been proposed by A. Harten several years ago. Concrete ingredients of this methodology have been described so far essentially for problems in a single space dimension. In order to realize such concepts for problems with several spatial dimensions and boundary fitted meshes essential deviations from previous investigations appear to be necessary though. This concerns handling the more complex interrelations of fluxes across cell interfaces, the derivation of appropriate evolution equations for multiscale representations of cell averages, stability and convergence, quantifying the compression effects by suitable adapted multiscale transformations and last but not least laying grounds for ultimately avoiding the storage of data corresponding to a full global mesh for the highest level of resolution. The objective of this paper is to develop such ingredients for any spatial dimension and block structured meshes obtained as parametric images of Cartesian grids. We conclude with some numerical results for the two–dimensional Euler equations modeling hypersonic flow around a blunt body.", "1 Model Problem and Its Discretization.- 1.1 Conservation Laws.- 1.2 Finite Volume Methods.- 2 Multiscale Setting.- 2.1 Hierarchy of Meshes.- 2.2 Motivation.- 2.3 Box Wavelet.- 2.3.1 Box Wavelet on a Cartesian Grid Hierarchy.- 2.3.2 Box Wavelet on an Arbitrary Nested Grid Hierarchy.- 2.4 Change of Stable Completion.- 2.5 Box Wavelet with Higher Vanishing Moments.- 2.5.1 Definition and Construction.- 2.5.2 A Univariate Example.- 2.5.3 A Remark on Compression Rates.- 2.6 Multiscale Transformation.- 3 Locally Refined Spaces.- 3.1 Adaptive Grid and Significant Details.- 3.2 Grading.- 3.3 Local Multiscale Transformation.- 3.4 Grading Parameter.- 3.5 Locally Uniform Grids.- 3.6 Algorithms: Encoding, Thresholding, Grading, Decoding.- 3.7 Conservation Property.- 3.8 Application to Curvilinear Grids.- 4 Adaptive Finite Volume Scheme.- 4.1 Construction.- 4.1.1 Strategies for Local Flux Evaluation.- 4.1.2 Strategies for Prediction of Details.- 4.2 A gorithms: Initial data, Prediction, Fluxes and Evolution.- 5 Error Analysis.- 5.1 Perturbation Error.- 5.2 Stability of Approximation.- 5.3 Reliability of Prediction.- 6 Data Structures and Memory Management.- 6.1 Algorithmic Requirements and Design Criteria.- 6.2 Hashing.- 6.3 Data Structures.- 7 Numerical Experiments.- 7.1 Parameter Studies.- 7.1.1 Test Configurations.- 7.1.2 Discretization.- 7.1.3 Computational Complexity and Stability.- 7.1.4 Hash Parameters.- 7.2 Real World Application.- 7.2.1 Configurations.- 7.2.2 Discretization.- 7.2.3 Discussion of Results.- A Plots of Numerical Experiments.- B The Context of Biorthogonal Wavelets.- B.1 General Setting.- B.1.1 Multiscale Basis.- B.1.2 Stable Completion.- B.1.3 
Multiscale Transformation.- B.2 Biorthogonal Wavelets of the Box Function.- B.2.1 Haar Wavelets.- B.2.2 Biorthogonal Wavelets on the Real Line.- References.- List of Figures.- List of Tables.- Notation.", "", "In recent years the concept of fully adaptive multiscale finite volume schemes for conservation laws has been developed and analytically investigated. Here the grid adaptation is performed by means of a multiscale analysis. So far, all cells are evolved in time using the same time step size. In the present work this concept is extended incorporating locally varying time stepping. A general strategy is presented for explicit as well as implicit time discretization. The efficiency and the accuracy of the proposed concept is verified numerically.", "We present a new adaptive numerical scheme for solving parabolic PDEs in Cartesian geometry. Applying a finite volume discretization with explicit time integration, both of second order, we employ a fully adaptive multiresolution scheme to represent the solution on locally refined nested grids. The fluxes are evaluated on the adaptive grid. A dynamical adaption strategy to advance the grid in time and to follow the time evolution of the solution directly exploits the multiresolution representation. Applying this new method to several test problems in one, two and three space dimensions, like convection-diffusion, viscous Burgers and reaction-diffusion equations, we show its second-order accuracy and demonstrate its computational efficiency.", "Given any scheme in conservation form and an appropriate uniform grid for the numerical solution of the initial value problem for one-dimensional hyperbolic conservation laws we describe a multiresolution algorithm that approximates this numerical solution to a prescribed tolerance in an efficient manner. To do so we consider the grid-averages of the numerical solution for a hierarchy of nested diadic grids in which the given grid is the finest, and introduce an equivalent multiresolution representation. The multiresolution representation of the numerical solution consists of its grid-averages for the coarsest grid and the set of errors in predicting the grid-averages of each level of resolution in this hierarchy from those of the next coarser one. Once the numerical solution is resolved to our satisfaction in a certain locality of some grid, then the prediction errors there are small for this particular grid and all finer ones; this enables us to compress data by setting to zero small components of the representation which fall below a prescribed tolerance. Therefore instead of computing the time-evolution of the numerical solution on the given grid we compute the time-evolution of its compressed multiresolution representation. Algorithmically this amounts to computing the numerical fluxes of the given scheme at the points of the given grid by a hierarchical algorithm which starts with the computation of these numerical fluxes at the points of the coarsest grid and then proceeds through diadic refinements to the given grid. At each step of refinement we add the values of the numerical flux at the center of the coarser cells. The information in the multiresolution representation of the numerical solution is used to determine whether the solution is locally well-resolved. When this is the case we replace the costly exact value of the numerical flux with an accurate enough approximate value which is obtained by an inexpensive interpolation from the coarser grid. 
The computational efficiency of this multiresolution algorithm is proportional to the rate of data compression (for a prescribed level of tolerance) that can be achieved for the numerical solution of the given scheme.", "", "We present a fully adaptive numerical scheme for evolutionary PDEs in Cartesian geometry based on a second-order finite volume discretization. A multiresolution strategy allows local grid refinement while controlling the approximation error in space. For time discretization we use an explicit Runge-Kutta scheme of second-order with a scale-dependent time step. On the finest scale the size of the time step is imposed by the stability condition of the explicit scheme. On larger scales, the time step can be increased without violating the stability requirement of the explicit scheme. The implementation uses a dynamic tree data structure. Numerical validations for test problems in one space dimension demonstrate the efficiency and accuracy of the local time-stepping scheme with respect to both multiresolution scheme with global time stepping and finite volume scheme on a regular grid. Fully adaptive three-dimensional computations for reaction-diffusion equations illustrate the additional speed-up of the local time stepping for a thermo-diffusive flame instability." ] }
0807.0993
1504298150
With the advent of increasingly complex hardware in real-time embedded systems (processors with performance enhancing features such as pipelines, cache hierarchy, multiple cores), many processors now have a set-associative L2 cache. Thus, there is a need for considering cache hierarchies when validating the temporal behavior of real-time systems, in particular when estimating tasks' worst-case execution times (WCETs). To the best of our knowledge, there is only one approach for WCET estimation for systems with cache hierarchies [Mueller, 1997], which turns out to be unsafe for set-associative caches. In this paper, we highlight the conditions under which the approach described in [Mueller, 1997] is unsafe. A safe static instruction cache analysis method is then presented. Contrary to [Mueller, 1997], our method supports set-associative and fully associative caches. The proposed method is evaluated on medium-size and large programs. We show that the method is tight most of the time. We further show that in all cases WCET estimations are much tighter when considering the cache hierarchy than when considering only the L1 cache. An evaluation of the analysis time is conducted, demonstrating that analysing the cache hierarchy has a reasonable computation time.
To be safe, existing static cache analysis methods determine possible cache contents at every point in the execution, considering all execution paths altogether. Possible cache contents can be represented as sets of concrete cache states @cite_19 or by a more compact representation called abstract cache states (ACS) @cite_9 @cite_4 @cite_10 @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_19", "@cite_10" ], "mid": [ "2098030406", "1507495123", "1595437973", "2136910062", "93622544" ], "abstract": [ "This paper contributes a comprehensive study of a framework to bound worst-case instruction cache performance for caches with arbitrary levels of associativity. The framework is formally introduced, operationally described and its correctness is shown. Results of incorporating instruction cache predictions within pipeline simulation show that timing predictions for set-associative caches remain just as tight as predictions for direct-mapped caches. The low cache simulation overhead allows interactive use of the analysis tool and scales well with increasing associativity. The approach taken is based on a data-flow specification of the problem and provides another step toward worst-case execution time prediction of contemporary architectures and its use in schedulability analysis for hard real-time systems.", "The USES-groupat the Universitat des Saarlandes follows an approach to compute reliable run-time guarantees which is both wellbased on theoretical foundations and practical from a software engineering and an efficiency point of view. Several aspects are essential to the USES approach: the resulting system is modular by structuring the task into a sequence of subtasks, which are tackled with appropriate methods. Generic and generative methods are used whenever possible. These principles lead to an understandable, maintainable, efficient, and provably correct system. This paper gives an overview of the methods used in the USES approach to WCET determination. A fully functional prototype system for the Motorola ColdFire MCF 5307 processor is presented, the implications of processor design on the predictability of behavior described, and experiences with analyzing applications running on this processor reported.", "Precise run-time prediction suffers from a complexity problem when doing an integrated analysis. This problem is characterised by the conflict between an optimal solution and the complexity of the computation of the solution. The analysis of modern hardware consists of two parts: a) the analysis of the microarchitecture‘s behaviour (caches, pipelines) and b) the search for the longest program path. Because an integrated analysis has a significant computational complexity, we chose to separate these two steps. By this, an ordering problem arises, because the steps depend on each other. In this paper we show how the microarchitecture analysis can be separated from the path analysis in order to make the overall analysis fast. Practical experiments will show that this separation, however, does not make the analysis more pessimistic than existing approaches. Furthermore, we show that the approach can be used to analyse executables created by a standard optimising compiler.", "Multitasked real-time systems often employ caches to boost performance. However the unpredictable dynamic behavior of caches makes schedulability analysis of such systems difficult. In particular, the effect of caches needs to be considered for estimating the inter-task interference. As the memory blocks of different tasks can map to the same cache blocks, preemption of a task may introduce additional cache misses. The time penalty introduced by these misses is called the cache-related preemption delay (CRPD). In this paper, we provide a program path analysis technique to estimate CRPD. 
Our technique performs path analysis of both the preempted and the preempting tasks. Furthermore, we improve the accuracy of the analysis by estimating the possible states of the entire cache at each possible preemption point rather than estimating the states of each cache block independently. To avoid incurring high space requirements, the cache states can be maintained symbolically as a binary decision diagram. Experimental results indicate that we obtain tight CRPD estimates for realistic benchmarks.", "" ] }
0807.0993
1504298150
With the advent of increasingly complex hardware in real-time embedded systems (processors with performance enhancing features such as pipelines, cache hierarchy, multiple cores), many processors now have a set-associative L2 cache. Thus, there is a need for considering cache hierarchies when validating the temporal behavior of real-time systems, in particular when estimating tasks' worst-case execution times (WCETs). To the best of our knowledge, there is only one approach for WCET estimation for systems with cache hierarchies [Mueller, 1997], which turns out to be unsafe for set-associative caches. In this paper, we highlight the conditions under which the approach described in [Mueller, 1997] is unsafe. A safe static instruction cache analysis method is then presented. Contrary to [Mueller, 1997], our method supports set-associative and fully associative caches. The proposed method is evaluated on medium-size and large programs. We show that the method is tight most of the time. We further show that in all cases WCET estimations are much tighter when considering the cache hierarchy than when considering only the L1 cache. An evaluation of the analysis time is conducted, demonstrating that analysing the cache hierarchy has a reasonable computation time.
In @cite_9 the approach is based on abstract interpretation @cite_18 @cite_5 and uses ACS. An Update function is defined to represent a memory access to the cache, and a Join function is defined to merge two different ACS when there is uncertainty about the path followed at run-time (e.g. at the end of a conditional construct). In this approach, three different analyses, each relying on fixpoint computation, are applied to determine: if a memory block is always in the cache (Must analysis), if a memory block may be present in the cache (May analysis), and if a memory block will not be evicted after it has been first loaded (Persistence analysis). A worst-case cache behavior classification (e.g. always hit, always miss) can then be assigned to every instruction based on the results of the three analyses. This approach, originally designed for LRU caches, has been extended to different cache replacement policies in @cite_1 : Pseudo-LRU, Pseudo-Round-Robin. To our knowledge, this approach has not been extended to analyze multiple levels of caches. Our multi-level cache analysis will be defined as an extension of @cite_9 , mainly because of the theoretical results applicable when using abstract interpretation.
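As an illustration of what the Update and Join functions look like, the following Python sketch implements the Must analysis for a single set of an A-way LRU cache, following the standard abstract-interpretation formulation (the data layout and block names are illustrative, not taken from @cite_9): an ACS is a list of sets, where position i holds the blocks whose maximal age is i.

def must_update(acs, block):
    # Must-analysis Update for one set of an A-way LRU cache: the
    # accessed block gets age 0, only blocks known to be younger than
    # its previous age are aged by one, the rest keep their age.
    ways = len(acs)
    old_age = next((i for i, s in enumerate(acs) if block in s), ways)
    new = [set() for _ in range(ways)]
    new[0] = {block}
    for i, s in enumerate(acs):
        for b in s:
            if b == block:
                continue
            age = i + 1 if i < old_age else i
            if age < ways:
                new[age].add(b)
    return new

def must_join(acs1, acs2):
    # Must-analysis Join: keep only blocks present in both incoming
    # states, at their maximal (least favourable) age.
    ways = len(acs1)
    age1 = {b: i for i, s in enumerate(acs1) for b in s}
    age2 = {b: i for i, s in enumerate(acs2) for b in s}
    new = [set() for _ in range(ways)]
    for b in set(age1) & set(age2):
        new[max(age1[b], age2[b])].add(b)
    return new

# after accessing a, b, a on an empty 2-way set, 'a' is guaranteed cached
acs = [set(), set()]
for ref in ["a", "b", "a"]:
    acs = must_update(acs, ref)
print(acs)   # [{'a'}, {'b'}] -> a further access to 'a' is always a hit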
{ "cite_N": [ "@cite_1", "@cite_5", "@cite_9", "@cite_18" ], "mid": [ "2104626020", "", "1595437973", "2043100293" ], "abstract": [ "The architecture of tools for the determination of worst case execution times (WCETs) as well as the precision of the results of WCET analyses strongly depend on the architecture of the employed processor. The cache replacement strategy influences the results of cache behavior prediction; out-of-order execution and control speculation introduce interferences between processor components, e.g., caches, pipelines, and branch prediction units. These interferences forbid modular designs of WCET tools, which would execute the subtasks of WCET analysis consecutively. Instead, complex integrated designs are needed, resulting in high demand for memory space and analysis time. We have implemented WCET tools for a series of increasingly complex processors: SuperSPARC, Motorola ColdFire 5307, and Motorola PowerPC 755. In this paper, we describe the designs of these tools, report our results and the lessons learned, and give some advice as to the predictability of processor architectures.", "", "Precise run-time prediction suffers from a complexity problem when doing an integrated analysis. This problem is characterised by the conflict between an optimal solution and the complexity of the computation of the solution. The analysis of modern hardware consists of two parts: a) the analysis of the microarchitecture‘s behaviour (caches, pipelines) and b) the search for the longest program path. Because an integrated analysis has a significant computational complexity, we chose to separate these two steps. By this, an ordering problem arises, because the steps depend on each other. In this paper we show how the microarchitecture analysis can be separated from the path analysis in order to make the overall analysis fast. Practical experiments will show that this separation, however, does not make the analysis more pessimistic than existing approaches. Furthermore, we show that the approach can be used to analyse executables created by a standard optimising compiler.", "A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe (+), (-), (±) where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer, (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried in the absence of certainty about their feasibility, …)." ] }
0807.0993
1504298150
With the advent of increasingly complex hardware in real-time embedded systems (processors with performance enhancing features such as pipelines, cache hierarchy, multiple cores), many processors now have a set-associative L2 cache. Thus, there is a need for considering cache hierarchies when validating the temporal behavior of real-time systems, in particular when estimating tasks' worst-case execution times (WCETs). To the best of our knowledge, there is only one approach for WCET estimation for systems with cache hierarchies [Mueller, 1997], which turns out to be unsafe for set-associative caches. In this paper, we highlight the conditions under which the approach described in [Mueller, 1997] is unsafe. A safe static instruction cache analysis method is then presented. Contrary to [Mueller, 1997], our method supports set-associative and fully associative caches. The proposed method is evaluated on medium-size and large programs. We show that the method is tight most of the time. We further show that in all cases WCET estimations are much tighter when considering the cache hierarchy than when considering only the L1 cache. An evaluation of the analysis time is conducted, demonstrating that analysing the cache hierarchy has a reasonable computation time.
In @cite_17 @cite_14 , so-called static cache simulation is used to determine every possible content of the cache before each instruction. Static cache simulation computes abstract cache states using dataflow analysis. A classification of the worst-case cache behavior (e.g. always hit, always miss, first miss) is then assigned to each instruction. The base approach, initially designed for direct-mapped caches, was later extended to set-associative caches @cite_7 .
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_17" ], "mid": [ "2098030406", "2149052625", "1484206933" ], "abstract": [ "This paper contributes a comprehensive study of a framework to bound worst-case instruction cache performance for caches with arbitrary levels of associativity. The framework is formally introduced, operationally described and its correctness is shown. Results of incorporating instruction cache predictions within pipeline simulation show that timing predictions for set-associative caches remain just as tight as predictions for direct-mapped caches. The low cache simulation overhead allows interactive use of the analysis tool and scales well with increasing associativity. The approach taken is based on a data-flow specification of the problem and provides another step toward worst-case execution time prediction of contemporary architectures and its use in schedulability analysis for hard real-time systems.", "The contributions of this paper are twofold. First, an automatic tool-based approach is described to bound worst-case data cache performance. The given approach works on fully optimized code, performs the analysis over the entire control flow of a program, detects and exploits both spatial and temporal locality within data references, produces results typically within a few seconds, and estimates, on average, 30 tighter WCET bounds than can be predicted without analyzing data cache behavior. Results obtained by running the system on representative programs are presented and indicate that timing analysis of data cache behavior can result in significantly tighter worst-case performance predictions. Second, a framework to bound worst-case instruction cache performance for set-associative caches is formally introduced and operationally described. Results of incorporating instruction cache predictions within pipeline simulation show that timing predictions for set-associative caches remain just as tight as predictions for direct-mapped caches. The cache simulation overhead scales linearly with increasing associativity.", "This work takes a fresh look at the simulation of cache memories. It introduces the technique of static cache simulation that statically predicts a large portion of cache references. To efficiently utilize this technique, a method to perform efficient on-the-fly analysis of programs in general is developed and proved correct. This method is combined with static cache simulation for a number of applications. The application of fast instruction cache analysis provides a new framework to evaluate instruction cache memories that outperforms even the fastest techniques published. Static cache simulation is shown to address the issue of predicting cache behavior, contrary to the belief that cache memories introduce unpredictability to real-time systems that cannot be efficiently analyzed. Static cache simulation for instruction caches provides a large degree of predictability for real-time systems. In addition, an architectural modification through bit-encoding is introduced that provides fully predictable caching behavior. Even for regular instruction caches without architectural modifications, tight bounds for the execution time of real-time programs can be derived from the information provided by the static cache simulator. Finally, the debugging of real-time applications can be enhanced by displaying the timing information of the debugged program at breakpoints. 
The timing information is determined by simulating the instruction cache behavior during program execution and can be used, for example, to detect missed deadlines and locate time-consuming code portions. Overall, the technique of static cache simulation provides a novel approach to analyze cache memories and has been shown to be very efficient for numerous applications." ] }
0807.1494
2950910462
Algorithm selection is typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. In recent work, we adopted an online approach, in which a performance model is iteratively updated and used to guide selection on a sequence of problem instances. The resulting exploration-exploitation trade-off was represented as a bandit problem with expert advice, using an existing solver for this game, but this required the setting of an arbitrary bound on algorithm runtimes, thus invalidating the optimal regret of the solver. In this paper, we propose a simpler framework for representing algorithm selection as a bandit problem, with partial information, and an unknown bound on losses. We adapt an existing solver to this game, proving a bound on its expected regret, which holds also for the resulting algorithm selection technique. We present preliminary experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark.
A seminal paper in the field of algorithm selection is @cite_15 , in which offline, per instance selection is first proposed, for both decision and optimisation problems. More recently, similar concepts have been proposed, under different terminology, in the meta-learning community @cite_32 @cite_17 @cite_3 . Research in this field usually deals with optimisation problems, and is focused on maximizing solution quality, without taking into account the computational aspect. Work on empirical hardness models @cite_13 @cite_20 is instead applied to decision problems, and focuses on obtaining accurate models of runtime performance, conditioned on numerous features of the problem instances, as well as on parameters of the solvers @cite_35 . The models are used to perform algorithm selection on a per instance basis, and are learned offline: online selection is advocated in @cite_35 . Literature on algorithm portfolios @cite_2 @cite_9 @cite_28 is usually focused on choice criteria for building the set of candidate solvers, such that their areas of good performance do not overlap, and on the optimal static allocation of computational resources among elements of the portfolio.
{ "cite_N": [ "@cite_35", "@cite_28", "@cite_9", "@cite_32", "@cite_3", "@cite_2", "@cite_15", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "164287990", "2408622691", "", "", "1985789779", "2029537801", "1495775210", "1500443963", "1546325545", "2145680191" ], "abstract": [ "Tuning an algorithm’s parameters for robust and high performance is a tedious and time-consuming task that often requires knowledge about both the domain and the algorithm of interest. Furthermore, the optimal parameter configuration to use may differ considerably across problem instances. In this report, we define and tackle the algorithm configuration problem, which is to automatically choose the optimal parameter configuration for a given algorithm on a per-instance base. We employ an indirect approach that predicts algorithm runtime for the problem instance at hand and each (continuous) parameter configuration, and then simply chooses the configuration that minimizes the prediction. This approach is based on similar work by Leyton- [LBNS02, NLBD04] who tackle the algorithm selection problem [Ric76] (given a problem instance, choose the best algorithm to solve it). While all previous studies for runtime prediction focussed on tree search algorithm, we demonstrate that it is possible to fairly accurately predict the runtime of SAPS [HTH02], one of the best-performing stochastic local search algorithms for SAT. We also show that our approach automatically picks parameter configurations that speed up SAPS by an average factor of more than two when compared to its default parameter configuration. Finally, we introduce sequential Bayesian learning to the problem of runtime prediction, enabling an incremental learning approach and yielding very informative estimates of predictive uncertainty.", "We present an approach for improving the performance of combinatorial optimization algorithms by generating an optimal Parallel Portfolio of Algorithms (PPA). A PPA is a collection of diverse algorithms for solving a single problem, all running concurrently on a single processor until a solution is produced. The performance of the portfolio may be controlled by assigning different shares of processor time to each algorithm. We present a method for finding a static PPA, in which the share of processor time allocated to each algorithm is fixed. The schedule is shown to be optimal with respect to a given training set of instances. We draw bounds on the performance of the PPA over random instances and evaluate the performance empirically on a collection of 23 state-of-the-art SAT algorithms. The results show significant performance gains (up to a factor of 2) over the fastest individual algorithm in a realistic setting.", "", "", "Recent advances in meta-learning are providing the foundations to construct meta-learning assistants and task-adaptive learners. The goal of this special issue is to foster an interest in meta-learning by compiling representative work in the field. The contributions to this special issue provide strong insights into the construction of future meta-learning tools. In this introduction we present a common frame of reference to address work in meta-learning through the concept of meta-knowledge. We show how meta-learning can be simply defined as the process of exploiting knowledge about learning that enables us to understand and improve the performance of learning algorithms.", "", "Publisher Summary The problem of selecting an effective algorithm arises in a wide variety of situations. 
This chapter starts with a discussion on abstract models: the basic model and associated problems, the model with selection based on features, and the model with variable performance criteria. One objective of this chapter is to explore the applicability of the approximation theory to the algorithm selection problem. There is an intimate relationship here and that the approximation theory forms an appropriate base upon which to develop a theory of algorithm selection methods. The approximation theory currently lacks much of the necessary machinery for the algorithm selection problem. There is a need to develop new results and apply known techniques to these new circumstances. The final pages of this chapter form a sort of appendix, which lists 15 specific open problems and questions in this area. There is a close relationship between the algorithm selection problem and the general optimization theory. This is not surprising since the approximation problem is a special form of the optimization problem. Most realistic algorithm selection problems are of moderate to high dimensionality and thus one should expect them to be quite complex. One consequence of this is that most straightforward approaches (even well-conceived ones) are likely to lead to enormous computations for the best selection. The single most important part of the solution of a selection problem is the appropriate choice of the form for selection mapping. It is here that theories give the least guidance and that the art of problem solving is most crucial.", "We propose a new approach for understanding the algorithm-specific empiricalh ardness of NP-Hard problems. In this work we focus on the empirical hardness of the winner determination problem--an optimization problem arising in combinatorial auctions--when solved by ILOG's CPLEX software. We consider nine widely-used problem distributions and sample randomly from a continuum of parameter settings for each distribution. We identify a large number of distribution-nonspecific features of data instances and use statisticalregression techniques to learn, evaluate and interpret a function from these features to the predicted hardness of an instance.", "It is well known that the ratio of the number of clauses to the number of variables in a random k-SAT instance is highly correlated with the instance's empirical hardness. We consider the problem of identifying such features of random SAT instances automatically using machine learning. We describe and analyze models for three SAT solvers - kcnfs, oksolver and satz - and for two different distributions of instances: uniform random 3-SAT with varying ratio of clauses-to-variables, and uniform random 3-SAT with fixed ratio of clauses-to-variables. We show that surprisingly accurate models can be built in all cases. Furthermore, we analyze these models to determine which features are most useful in predicting whether an instance will be hard to solve. Finally we discuss the use of our models to build SATzilla, an algorithm portfolio for SAT.", "Different researchers hold different views of what the term meta-learning exactly means. The first part of this paper provides our own perspective view in which the goal is to build self-adaptive learners (i.e. learning algorithms that improve their bias dynamically through experience by accumulating meta-knowledge). The second part provides a survey of meta-learning as reported by the machine-learning literature. 
We find that, despite different views and research lines, a question remains constant: how can we exploit knowledge about learning (i.e. meta-knowledge) to improve the performance of learning algorithms? Clearly the answer to this question is key to the advancement of the field and continues being the subject of intensive research." ] }
0807.1494
2950910462
Algorithm selection is typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. In recent work, we adopted an online approach, in which a performance model is iteratively updated and used to guide selection on a sequence of problem instances. The resulting exploration-exploitation trade-off was represented as a bandit problem with expert advice, using an existing solver for this game, but this required the setting of an arbitrary bound on algorithm runtimes, thus invalidating the optimal regret of the solver. In this paper, we propose a simpler framework for representing algorithm selection as a bandit problem, with partial information, and an unknown bound on losses. We adapt an existing solver to this game, proving a bound on its expected regret, which holds also for the resulting algorithm selection technique. We present preliminary experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark.
A number of interesting dynamic exceptions to the static selection paradigm have been proposed recently. In @cite_25 , algorithm performance modeling is based on the behavior of the candidate algorithms during a predefined amount of time, called the observational horizon, and dynamic context-sensitive restart policies for SAT solvers are presented. In both cases, the model is learned offline. In a Reinforcement Learning @cite_16 setting, algorithm selection can be formulated as a Markov Decision Process: in @cite_6 , the algorithm set includes sequences of recursive algorithms, formed dynamically at run-time by solving a sequential decision problem, and a variation of Q-learning is used to find a dynamic algorithm selection policy; the resulting technique is per instance, dynamic and online. In @cite_0 , a set of deterministic algorithms is considered, and, under some limitations, static and dynamic schedules are obtained, based on dynamic programming. In both cases, the method presented is per set, offline.
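The following toy sketch gives the flavour of learned recursive algorithm selection in a sorting setting (the cost model, the size bucketing and the Monte-Carlo style value update are simplifications of ours, not the exact variation of Q-learning used in @cite_6): at every recursive call the learner chooses, based on learned cost estimates, between a quadratic-cost base algorithm and a divide-and-conquer step.

import math, random
random.seed(0)

Q = {}                      # Q[(size_bucket, action)] = estimated cost
ALPHA, EPS = 0.2, 0.1
ACTIONS = ("insertion", "quick")

def bucket(n):
    return int(math.log2(max(n, 1)))

def q(s, a):
    return Q.get((s, a), 0.0)

def solve(xs):
    # Sort xs, choosing at every recursive call between a base algorithm
    # (insertion sort, modelled with a quadratic comparison count) and a
    # quicksort partition step, guided by the learned cost estimates Q.
    n = len(xs)
    if n <= 1:
        return list(xs), 0.0
    s = bucket(n)
    if random.random() < EPS:
        a = random.choice(ACTIONS)
    else:
        a = min(ACTIONS, key=lambda act: q(s, act))
    if a == "insertion":
        cost = n * n / 4.0                    # average-case comparison count
        result = sorted(xs)
    else:
        pivot = xs[0]
        left = [v for v in xs[1:] if v < pivot]
        right = [v for v in xs[1:] if v >= pivot]
        ls, lc = solve(left)
        rs, rc = solve(right)
        cost = (n - 1) + lc + rc
        result = ls + [pivot] + rs
    Q[(s, a)] = (1 - ALPHA) * q(s, a) + ALPHA * cost   # Monte-Carlo style backup
    return result, cost

for _ in range(300):
    solve([random.random() for _ in range(random.randint(2, 256))])
print({k: round(v, 1) for k, v in sorted(Q.items())})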
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_25", "@cite_6" ], "mid": [ "1552295314", "2121863487", "2148818117", "1492936035" ], "abstract": [ "Automatic specialization of algorithms to a limited domain is an interesting and industrially applicable problem. We calculate the optimal assignment of computational resources to several different solvers that solve the same problem. Optimality is considered with regard to the expected solution time on a set of problem instances from the domain of interest. We present two approaches, a static and dynamic one. The static approach leads to a simple analytically calculable solution. The dynamic approach results in formulation of the problem as a Markov Decision Process. Our tests on the SAT Problem show that the presented methods are quite effective. Therefore, both methods are attractive for applications and future research.", "Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.", "We describe theoretical results and empirical study of context-sensitive restart policies for randomized search procedures. The methods generalize previous results on optimal restart policies by exploiting dynamically updated beliefs about the probability distribution for run time. Rather than assuming complete knowledge or zero knowledge about the run-time distribution, we formulate restart policies that consider real-time observations about properties of instances and the solver's activity. We describe background work on the application of Bayesian methods to build predictive models for run time, introduce an optimal policy for dynamic restarts that considers predictions about run time, and perform a comparative Study of traditional fixed versus dynamic restart policies.", "Many computational problems can be solved by multiple algorithms, with different algorithms fastest for different problem sizes, input distributions, and hardware characteristics. We consider the problem of algorithm selection: dynamically choose an algorithm to attack an instance of a problem with the goal of minimizing the overall execution time. We formulate the problem as a kind of Markov decision process (MDP), and use ideas from reinforcement learning to solve it. This paper introduces a kind of MDP that models the algorithm selection problem by allowing multiple state transitions. The well known Q-learning algorithm is adapted for this case in a way that combines both Monte-Carlo and Temporal Difference methods. 
Also, this work uses, and extends in a way to control problems, the Least-Squares Temporal Difference algorithm (LSTD(0)) of Boyan. The experimental study focuses on the classic problems of order statistic selection and sorting. The encouraging results reveal the potential of applying learning methods to traditional computational problems." ] }
0807.1494
2950910462
Algorithm selection is typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. In recent work, we adopted an online approach, in which a performance model is iteratively updated and used to guide selection on a sequence of problem instances. The resulting exploration-exploitation trade-off was represented as a bandit problem with expert advice, using an existing solver for this game, but this required the setting of an arbitrary bound on algorithm runtimes, thus invalidating the optimal regret of the solver. In this paper, we propose a simpler framework for representing algorithm selection as a bandit problem, with partial information, and an unknown bound on losses. We adapt an existing solver to this game, proving a bound on its expected regret, which holds also for the resulting algorithm selection technique. We present preliminary experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark.
An approach based on runtime distributions can be found in @cite_27 @cite_30 , for parallel independent processes and shared resources respectively. The runtime distributions are assumed to be known, and the expected value of a cost function, accounting for both wall-clock time and resources usage, is minimized. A dynamic schedule is evaluated offline, using a branch-and-bound algorithm to find the optimal one in a tree of possible schedules. Examples of allocation to two processes are presented with artificially generated runtimes, and a real Latin square solver. Unfortunately, the computational complexity of the tree search grows exponentially in the number of processes.
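A much simpler, static version of this idea can be sketched as follows (the runtime distributions are made up, and the grid search replaces the branch-and-bound search over dynamic schedules used in the cited work): given runtime samples for two algorithms sharing a single processor, the expected wall-clock time to the first solution is estimated for each candidate time share, and the best static share is kept.

import numpy as np

rng = np.random.default_rng(0)

def expected_time(share, t1, t2):
    # Expected wall-clock time to the first solution when two algorithms
    # with sampled runtimes t1, t2 run interleaved on one processor with
    # time shares (share, 1 - share).
    return np.mean(np.minimum(t1 / share, t2 / (1.0 - share)))

# made-up runtime samples standing in for the known runtime distributions
t1 = rng.lognormal(mean=1.0, sigma=1.2, size=10_000)
t2 = rng.lognormal(mean=1.5, sigma=0.3, size=10_000)

shares = np.linspace(0.05, 0.95, 19)
costs = [expected_time(s, t1, t2) for s in shares]
best = shares[int(np.argmin(costs))]
print(best, min(costs), t1.mean(), t2.mean())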
{ "cite_N": [ "@cite_30", "@cite_27" ], "mid": [ "2139709118", "2068738275" ], "abstract": [ "The performance of anytime algorithms can be improved by simultaneously solving several instances of algorithm-problem pairs. These pairs may include different instances of a problem (such as starting from a different initial state), different algorithms (if several alternatives exist), or several runs of the same algorithm (for non-deterministic algorithms). In this paper we present a methodology for designing an optimal scheduling policy based on the statistical characteristics of the algorithms involved. We formally analyze the case where the processes share resources (a single-processor model), and provide an algorithm for optimal scheduling. We analyze, theoretically and empirically, the behavior of our scheduling algorithm for various distribution types. Finally, we present empirical results of applying our scheduling algorithm to the Latin Square problem.", "The performance of anytime algorithms having a nondeterministic nature can be improved by solving simultaneously several instances of the algorithm-problem pairs. These pairs may include different instances of a problem (like starting from a different initial state), different algorithms (if several alternatives exist), or several instances of the same algorithm (for nondeterministic algorithms).In this paper we present a general framework for optimal parallelization of independent processes. We show a mathematical model for this framework, present algorithms for optimal scheduling, and demonstrate its usefulness on a real problem." ] }
0807.1494
2950910462
Algorithm selection is typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. In recent work, we adopted an online approach, in which a performance model is iteratively updated and used to guide selection on a sequence of problem instances. The resulting exploration-exploitation trade-off was represented as a bandit problem with expert advice, using an existing solver for this game, but this required the setting of an arbitrary bound on algorithm runtimes, thus invalidating the optimal regret of the solver. In this paper, we propose a simpler framework for representing algorithm selection as a bandit problem, with partial information, and an unknown bound on losses. We adapt an existing solver to this game, proving a bound on its expected regret, which holds also for the resulting algorithm selection technique. We present preliminary experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark.
``Low-knowledge'' oblivious approaches can be found in @cite_12 @cite_33 , in which various simple indicators of current solution improvement are used for algorithm selection, in order to achieve the best solution quality within a given time contract. In @cite_33 , the selection process is dynamic: machine time shares are based on a recency-weighted average of performance improvements. We adopted a similar approach in @cite_8 , where we considered algorithms with a scalar state that had to reach a target value. The time to solution was estimated based on a shifting-window linear extrapolation of the learning curves.
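A minimal sketch of such a low-knowledge allocation rule follows (the algorithm names, the smoothing factor and the floor share are illustrative assumptions, not taken from @cite_33): each algorithm's score is a recency-weighted average of the solution-quality improvement it produced in the last time slice, and machine-time shares for the next slice are proportional to these scores, with a small floor so that no algorithm starves.

def update_scores(scores, improvements, alpha=0.3):
    # Recency-weighted average of the improvement observed for each
    # algorithm during the last time slice.
    return {a: (1.0 - alpha) * scores[a] + alpha * improvements[a]
            for a in scores}

def time_shares(scores, floor=0.05):
    # Turn scores into machine-time shares for the next slice, keeping a
    # small floor share so that no algorithm is starved.
    total = sum(scores.values())
    k = len(scores)
    if total <= 0.0:
        return {a: 1.0 / k for a in scores}
    return {a: floor + (1.0 - k * floor) * s / total for a, s in scores.items()}

scores = {"tabu": 0.0, "sa": 0.0, "ga": 0.0}          # hypothetical solvers
scores = update_scores(scores, {"tabu": 5.0, "sa": 1.0, "ga": 0.0})
print(time_shares(scores))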
{ "cite_N": [ "@cite_33", "@cite_12", "@cite_8" ], "mid": [ "2087393302", "1566901001", "1571953186" ], "abstract": [ "This paper addresses the question of allocating computational resources among a set of algorithms to achieve the best performance on scheduling problems. Our primary motivation in addressing this problem is to reduce the expertise needed to apply optimization technology. Therefore, we investigate algorithm control techniques that make decisions based only on observations of the improvement in solution quality achieved by each algorithm. We call our approach “low knowledge” since it does not rely on complex prediction models, either of the problem domain or of algorithm behavior. We show that a low-knowledge approach results in a system that achieves significantly better performance than all of the pure algorithms without requiring additional human expertise. Furthermore the low-knowledge approach achieves performance equivalent to a perfect high-knowledge classification approach.", "This paper addresses the question of selecting an algorithm from a predefined set that will have the best performance on a scheduling problem instance. Our goal is to reduce the expertise needed to apply constraint technology. Therefore, we investigate simple rules that make predictions based on limited problem instance knowledge. Our results indicate that it is possible to achieve superior performance over choosing the algorithm that performs best on average on the problem set. The results hold over a variety of different run lengths and on different types of scheduling problems and algorithms. We argue that low-knowledge approaches are important in reducing expertise required to exploit optimization technology.", "Given is a search problem or a sequence of search problems, as well as a set of potentially useful search algorithms. We propose a general framework for online allocation of computation time to search algorithms based on experience with their performance so far. In an example instantiation, we use simple linear extrapolation of performance for allocating time to various simultaneously running genetic algorithms characterized by different parameter values. Despite the large number of searchers tested in parallel, on various tasks this rather general approach compares favorably to a more specialized state-of-the-art heuristic; in one case it is nearly two orders of magnitude faster." ] }
0807.1494
2950910462
Algorithm selection is typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. In recent work, we adopted an online approach, in which a performance model is iteratively updated and used to guide selection on a sequence of problem instances. The resulting exploration-exploitation trade-off was represented as a bandit problem with expert advice, using an existing solver for this game, but this required the setting of an arbitrary bound on algorithm runtimes, thus invalidating the optimal regret of the solver. In this paper, we propose a simpler framework for representing algorithm selection as a bandit problem, with partial information, and an unknown bound on losses. We adapt an existing solver to this game, proving a bound on its expected regret, which holds also for the resulting algorithm selection technique. We present preliminary experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark.
For optimisation problems, if selection is aimed only at maximizing solution quality, the same problem instance can be solved multiple times, keeping only the best solution. In this case, algorithm selection can be represented as a @math -armed bandit problem, a variant of the game in which the reward attributed to each arm is the maximum payoff over a set of rounds. Solvers for this game are used in @cite_36 @cite_14 to implement oblivious per-instance selection from a set of multi-start optimisation techniques: each problem is treated independently, and multiple runs of the available solvers are allocated to maximize solution quality. Further references can be found in @cite_10 .
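The following toy sketch shows what such a max K-armed bandit allocation could look like for multi-start solvers, using a Boltzmann rule with an exponentially decaying temperature in the spirit of @cite_36 . The solver stand-ins and parameter values are assumptions made only for illustration.

```python
# Minimal sketch of a max K-armed bandit: the reward of an arm is the best
# payoff it has produced so far, and arms are picked with a Boltzmann rule
# whose temperature decays over rounds. Toy solvers, not real optimisers.
import math, random

def max_k_armed(solvers, rounds=100, t0=1.0, decay=0.95):
    best = {name: float("-inf") for name in solvers}   # best payoff per arm
    overall = float("-inf")
    temp = t0
    for _ in range(rounds):
        finite = {n: (b if b > float("-inf") else 0.0) for n, b in best.items()}
        weights = [math.exp(finite[n] / max(temp, 1e-6)) for n in solvers]
        name = random.choices(list(solvers), weights=weights)[0]
        payoff = solvers[name]()                        # one restart of that solver
        best[name] = max(best[name], payoff)
        overall = max(overall, payoff)
        temp *= decay                                   # exponentially decaying temperature
    return overall, best

# Toy usage with two random "solvers".
solvers = {"hill_climb": lambda: random.gauss(0, 1), "anneal": lambda: random.gauss(0.5, 2)}
print(max_k_armed(solvers, rounds=50))
```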
{ "cite_N": [ "@cite_36", "@cite_14", "@cite_10" ], "mid": [ "1587673378", "159910205", "2038372264" ], "abstract": [ "The multiarmed bandit is often used as an analogy for the tradeoff between exploration and exploitation in search problems. The classic problem involves allocating trials to the arms of a multiarmed slot machine to maximize the expected sum of rewards. We pose a new variation of the multiarmed bandit--the Max K-Armed Bandit--in which trials must be allocated among the arms to maximize the expected best single sample reward of the series of trials. Motivation for the Max K-Armed Bandit is the allocation of restarts among a set of multistart stochastic search algorithms. We present an analysis of this Max K-Armed Bandit showing under certain assumptions that the optimal strategy allocates trials to the observed best arm at a rate increasing double exponentially relative to the other arms. This motivates an exploration strategy that follows a Boltzmann distribution with an exponentially decaying temperature parameter. We compare this exploration policy to policies that allocate trials to the observed best arm at rates faster (and slower) than double exponentially. The results confirm, for two scheduling domains, that the double exponential increase in the rate of allocations to the observed best heuristic outperfonns the other approaches.", "We present an asymptotically optimal algorithm for the max variant of the k-armed bandit problem. Given a set of k slot machines, each yielding payoff from a fixed (but unknown) distribution, we wish to allocate trials to the machines so as to maximize the expected maximum payoff received over a series of n trials. Subject to certain distributional assumptions, we show that O(k ln(k δ) ln(n)2 e2) trials are sufficient to identify, with probability at least 1 - δ, a machine whose expected maximum payoff is within e of optimal. This result leads to a strategy for solving the problem that is asymptotically optimal in the following sense: the gap between the expected maximum payoff obtained by using our strategy for n trials and that obtained by pulling the single best arm for all n trials approaches zero as n → ∞.", "Algorithm selection can be performed using a model of runtime distribution, learned during a preliminary training phase. There is a trade-off between the performance of model-based algorithm selection, and the cost of learning the model. In this paper, we treat this trade-off in the context of bandit problems. We propose a fully dynamic and online algorithm selection technique, with no separate training phase: all candidate algorithms are run in parallel, while a model incrementally learns their runtime distributions. A redundant set of time allocators uses the partially trained model to propose machine time shares for the algorithms. A bandit problem solver mixes the model-based shares with a uniform share, gradually increasing the impact of the best time allocators as the model improves. We present experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark; and with a set of solvers for the Auction Winner Determination problem." ] }
0807.1496
2951160153
Motivated by the problem of routing reliably and scalably in a graph, we introduce the notion of a splicer, the union of spanning trees of a graph. We prove that for any bounded-degree n-vertex graph, the union of two random spanning trees approximates the expansion of every cut of the graph to within a factor of O(log n). For the random graph G_{n,p}, for p > c log n / n, two spanning trees give an expander. This is suggested by the case of the complete graph, where we prove that two random spanning trees give an expander. The construction of the splicer is elementary -- each spanning tree can be produced independently using an algorithm by Aldous and Broder: a random walk in the graph with edges leading to previously unvisited vertices included in the tree. A second important application of splicers is to graph sparsification where the goal is to approximate every cut (and more generally the quadratic form of the Laplacian) using only a small subgraph of the original graph. Benczur-Karger as well as Spielman-Srivastava have shown sparsifiers with O(n log n / eps^2) edges that achieve approximation within factors 1+eps and 1-eps. Their methods, based on independent sampling of edges, need Omega(n log n) edges to get any approximation (else the subgraph could be disconnected) and leave open the question of linear-size sparsifiers. Splicers address this question for random graphs by providing sparsifiers of size O(n) that approximate every cut to within a factor of O(log n).
The idea of using multiple routing trees and switching between them is inspired by the work of @cite_7 , who proposed a multi-path extension to standard tree-based routing. The method, called Path Splicing, computes multiple trees to each destination vertex, using simple methods to generate the trees; in one variant, each tree is a shortest path tree computed on a randomly perturbed set of edge weights. Path splicing appears to do extremely well in simulations, approaching the reliability of the underlying graph using only a small number of trees. (It has several other features from a practical viewpoint, such as allowing end vertices to specify paths, that we do not discuss in detail here.)
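To make the construction concrete, the sketch below computes several shortest-path trees toward a destination on independently perturbed edge weights, which is the spirit of the variant described above. The graph representation, the perturbation range, and all names are illustrative assumptions, not the cited paper's code.

```python
# Illustrative sketch: k shortest-path trees toward a destination, each
# computed on edge weights multiplied by an independent random factor.
import heapq, random

def shortest_path_tree(adj, dest, weight):
    """Dijkstra from dest over weights weight[(u, v)]; returns a parent map
    giving, for each reachable node, its next hop toward dest."""
    dist, parent = {dest: 0.0}, {dest: None}
    heap = [(0.0, dest)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v in adj[u]:
            nd = d + weight[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return parent

def spliced_trees(adj, base_weight, dest, k=3, spread=0.5):
    """k trees, each built from independently perturbed edge weights."""
    trees = []
    for _ in range(k):
        w = {e: c * random.uniform(1.0, 1.0 + spread) for e, c in base_weight.items()}
        trees.append(shortest_path_tree(adj, dest, w))
    return trees

# Tiny undirected example graph (edges listed in both directions).
adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
w = {("a","b"):1, ("b","a"):1, ("a","c"):1, ("c","a"):1,
     ("b","d"):1, ("d","b"):1, ("c","d"):1, ("d","c"):1}
print(spliced_trees(adj, w, dest="d", k=2))
```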
{ "cite_N": [ "@cite_7" ], "mid": [ "2401048951" ], "abstract": [ "We present path splicing, a primitive that constructs network paths from multiple independent routing processes that run over a single network topology. The routing processes compute distinct routing trees using randomly perturbed link weights. A few additional bits in packet headers give end systems access to a large number of paths. By changing these bits, nodes can redirect traffic without detailed knowledge of network paths. Assembling paths by “splicing” segments can yield up to an exponential improvement in path diversity for only a linear increase in storage and message complexity. We present randomized approaches for slice construction and failure recovery that achieve nearoptimal performance and are extremely simple to configure. Our evaluation of path splicing on realistic ISP topologies demonstrates a dramatic increase in reliability that approaches the best possible using only a small number of slices and for only a small increase in latency." ] }
0807.1496
2951160153
Motivated by the problem of routing reliably and scalably in a graph, we introduce the notion of a splicer, the union of spanning trees of a graph. We prove that for any bounded-degree n-vertex graph, the union of two random spanning trees approximates the expansion of every cut of the graph to within a factor of O(log n). For the random graph G_{n,p}, for p > c log n / n, two spanning trees give an expander. This is suggested by the case of the complete graph, where we prove that two random spanning trees give an expander. The construction of the splicer is elementary -- each spanning tree can be produced independently using an algorithm by Aldous and Broder: a random walk in the graph with edges leading to previously unvisited vertices included in the tree. A second important application of splicers is to graph sparsification where the goal is to approximate every cut (and more generally the quadratic form of the Laplacian) using only a small subgraph of the original graph. Benczur-Karger as well as Spielman-Srivastava have shown sparsifiers with O(n log n / eps^2) edges that achieve approximation within factors 1+eps and 1-eps. Their methods, based on independent sampling of edges, need Omega(n log n) edges to get any approximation (else the subgraph could be disconnected) and leave open the question of linear-size sparsifiers. Splicers address this question for random graphs by providing sparsifiers of size O(n) that approximate every cut to within a factor of O(log n).
On the other hand, our result for the union of spanning trees of bounded-degree graphs doesn't seem to have any analog for the union of matchings. Indeed, generating random perfect matchings of graphs is a highly nontrivial problem; computing the permanent of 0--1 matrices is the special case corresponding to bipartite graphs @cite_14 .
{ "cite_N": [ "@cite_14" ], "mid": [ "2161611531" ], "abstract": [ "We present a polynomial-time randomized algorithm for estimating the permanent of an arbitrary n × n matrix with nonnegative entries. This algorithm---technically a \"fully-polynomial randomized approximation scheme\"---computes an approximation that is, with high probability, within arbitrarily small specified relative error of the true value of the permanent." ] }
0806.1918
2950601786
The social Web is transforming the way information is created and distributed. Blog authoring tools enable users to publish content, while sites such as Digg and Del.icio.us are used to distribute content to a wider audience. With content fast becoming a commodity, interest in using social networks to promote and find content has grown, both on the side of content producers (viral marketing) and consumers (recommendation). Here we study the role of social networks in promoting content on Digg, a social news aggregator that allows users to submit links to and vote on news stories. Digg's goal is to feature the most interesting stories on its front page, and it aggregates opinions of its many users to identify them. Like other social networking sites, Digg allows users to designate other users as ``friends'' and see what stories they found interesting. We studied the spread of interest in news stories submitted to Digg in June 2006. Our results suggest that the pattern of the spread of interest in a story on the network is indicative of how popular the story will become. Stories that spread mainly outside of the submitter's neighborhood go on to be very popular, while stories that spread mainly through the submitter's social neighborhood prove not to be very popular. This effect is visible already in the early stages of voting, and one can make a prediction about the potential audience of a story simply by analyzing where the initial votes come from.
Our findings are in line with the conclusions of previous studies that showed that social networks play an important role in promoting and locating content @cite_18 @cite_0 @cite_15 . In particular, Lerman @cite_15 showed that users with larger social networks are more successful in getting their stories promoted to Digg's front page, even if the stories are not very interesting. These findings have implications for the design of social media and social networking sites. For example, some implementations of social recommendation may lead to the ``tyranny of the minority,'' where a small group of active, well-connected users dominates the site @cite_7 . Rather than being a liability, social networks can be used to, for example, more accurately assess the quality of content, as this paper shows.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_7", "@cite_15" ], "mid": [ "2115022330", "", "2134713486", "2130726349" ], "abstract": [ "Online social networking sites like Orkut, YouTube, and Flickr are among the most popular sites on the Internet. Users of these sites form a social network, which provides a powerful means of sharing, organizing, and finding content and contacts. The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale. Understanding these graphs is important, both to improve current systems and to design new applications of online social networks. This paper presents a large-scale measurement study and analysis of the structure of multiple online social networks. We examine data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. We crawled the publicly accessible user links on each site, obtaining a large portion of each social network's graph. Our data set contains over 11.3 million users and 328 million links. We believe that this is the first study to examine multiple online social networks at scale. Our results confirm the power-law, small-world, and scale-free properties of online social networks. We observe that the indegree of user nodes tends to match the outdegree; that the networks contain a densely connected core of high-degree nodes; and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network. Finally, we discuss the implications of these structural properties for the design of social network based systems.", "", "The social news aggregator Digg allows users to submit and moderate stories by voting on (digging) them. As is true of most social sites, user participation on Digg is non-uniformly distributed, with few users contributing a disproportionate fraction of content. We studied user participation on Digg, to see whether it is motivated by competition, fueled by user ranking, or social factors, such as community acceptance. For our study we collected activity data of the top users weekly over the course of a year. We computed the number of stories users submitted, dugg or commented on weekly. We report a spike in user activity in September 2006, followed by a gradual decline, which seems unaffected by the elimination of user ranking. The spike can be explained by a controversy that broke out at the beginning of September 2006. We believe that the lasting acrimony that this incident has created led to a decline of top user participation on Digg.", "Social media sites underscore the Web's transformation to a participatory medium in which users collaboratively create, evaluate, and distribute information. Innovations in social media have led to social information processing, a new paradigm for interacting with data. The social news aggregator Digg exploits social information processing for document recommendation and rating. Additionally, via mathematical modeling, it's possible to describe how collaborative document rating emerges from the independent decisions users make. Using such a model, the author reproduces observed ratings that actual stories on Digg have received." ] }
0806.1918
2950601786
The social Web is transforming the way information is created and distributed. Blog authoring tools enable users to publish content, while sites such as Digg and Del.icio.us are used to distribute content to a wider audience. With content fast becoming a commodity, interest in using social networks to promote and find content has grown, both on the side of content producers (viral marketing) and consumers (recommendation). Here we study the role of social networks in promoting content on Digg, a social news aggregator that allows users to submit links to and vote on news stories. Digg's goal is to feature the most interesting stories on its front page, and it aggregates opinions of its many users to identify them. Like other social networking sites, Digg allows users to designate other users as ``friends'' and see what stories they found interesting. We studied the spread of interest in news stories submitted to Digg in June 2006. Our results suggest that the pattern of the spread of interest in a story on the network is indicative of how popular the story will become. Stories that spread mainly outside of the submitter's neighborhood go on to be very popular, while stories that spread mainly through the submitter's social neighborhood prove not to be very popular. This effect is visible already in the early stages of voting, and one can make a prediction about the potential audience of a story simply by analyzing where the initial votes come from.
Other researchers have used Digg's trove of empirical data to study the dynamics of voting. Wu and Huberman @cite_1 found that interest in a story peaks when the story first hits the front page, and then decays with time, with a half-life of about a day. Their study is complementary to ours, as they studied the dynamics of stories after they hit the front page. Also, they do not identify a mechanism for the spread of interest in a story. We, on the other hand, propose, and empirically study, social networks as a mechanism for the spread of interest in a story. Crane and Sornette @cite_14 analyzed a large number of videos posted on YouTube. By looking at the dynamics of the number of votes received by the videos, they found that they could identify high-quality videos, whether they were selected by YouTube editors or spontaneously became popular. Like Wu and Huberman, they looked at aggregate statistics, not the microscopic dynamics of the spread of interest in stories.
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2406592771", "2058465497" ], "abstract": [ "With the rise of web 2.0 there is an ever-expanding source of interesting media because of the proliferation of usergenerated content. However, mixed in with this is a large amount of noise that creates a proverbial “needle in the haystack” when searching for relevant content. Although there is hope that the rich network of interwoven metadata may contain enough structure to eventually help sift through this noise, currently many sites serve up only the “most popular” things. Identifying only the most popular items can be useful, but doing so fails to take into account the famous “long tail” behavior of the web—the notion that the collective effect of small, niche interests can outweigh the market share of the few blockbuster (i.e. most-popular) items—thus providing only content that has mass appeal and masking the interests of the idiosyncratic many. YouTube, for example, hosts over 40 million videos— enough content to keep one occupied for more than 200 years. Are there intelligent tools to search through this information-rich environment and identify interesting and relevant content? Is there a way to identify emerging trends or “hot topics” in addition to indexing the long tail for content that has real value?", "The subject of collective attention is central to an information age where millions of people are inundated with daily messages. It is thus of interest to understand how attention to novel items propagates and eventually fades among large populations. We have analyzed the dynamics of collective attention among 1 million users of an interactive web site, digg.com, devoted to thousands of novel news stories. The observations can be described by a dynamical model characterized by a single novelty factor. Our measurements indicate that novelty within groups decays with a stretched-exponential law, suggesting the existence of a natural time scale over which attention fades." ] }
0806.3542
1484545291
This paper proposes and analyzes a distributed MAC protocol that achieves zero collision with no control message exchange nor synchronization. ZC (ZeroCollision) is neither reservation-based nor dynamic TDMA; the protocol supports variable-length packets and does not lose efficiency when some of the stations do not transmit. At the same time, ZC is not a CSMA; in its steady state, it is completely collision-free. The stations transmit repeatedly in a round-robin order once the convergence state is reached. If some stations skip their turn, their transmissions are replaced by idle @math -second mini-slots that enable the other stations to keep track of their order. Because of its short medium access delay and its efficiency, the protocol supports both real-time and elastic applications. The protocol allows for nodes leaving and joining the network; it can allocate more throughput to specific nodes (such as an access point). The protocol is robust against carrier sensing errors or clock drift. While collision avoidance is guaranteed in a single collision domain, it is not the case in a multiple collision one. However, experiments show ZC supports a comparable amount of goodput to CSMA in a multiple collision domain environment. The paper presents an analysis and extensive simulations of the protocol, confirming that ZC outperforms both CSMA and TDMA at high and low load.
In a reservation or dynamic TDMA protocol, a device reserves some future epochs to transmit its packets. Random reservation TDMA protocols have many variations. R-ALOHA @cite_28 , PRMA @cite_16 and their many derivatives adopt ``reserve-on-success.'' There, time is divided into a sequence of frames, and each frame consists of fixed-length slots. If a trial transmission is successful during a non-reserved slot within a frame, the corresponding slot in the following frames is regarded as reserved. @cite_7 @cite_1 analyze the performance and stability of this class of protocols.
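A minimal toy simulation of the reserve-on-success rule is sketched below. The contention probability, the number of slots, and the assumption that reservations are never released are simplifications made only to illustrate the mechanism, not a model of R-ALOHA or PRMA as specified.

```python
# Toy "reserve-on-success" simulation: a frame has fixed slots; stations
# contend on non-reserved slots, and a successful (collision-free)
# transmission reserves the same slot position in subsequent frames.
import random

def simulate(n_stations=6, slots=4, frames=20, p_tx=0.5, seed=1):
    random.seed(seed)
    owner = [None] * slots                     # slot index -> station holding a reservation
    successes = 0
    for _ in range(frames):
        for s in range(slots):
            if owner[s] is not None:
                successes += 1                 # reserved slot: collision-free transmission
                continue
            # Unreserved slot: stations without a reservation contend.
            contenders = [i for i in range(n_stations)
                          if i not in owner and random.random() < p_tx]
            if len(contenders) == 1:
                owner[s] = contenders[0]       # success reserves this slot in later frames
                successes += 1
            # len != 1 means an idle slot or a collision: nothing is reserved.
    return successes, owner

print(simulate())
```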
{ "cite_N": [ "@cite_28", "@cite_16", "@cite_1", "@cite_7" ], "mid": [ "112099158", "2123136297", "2019737634", "2127340503" ], "abstract": [ "", "Simulation work is reported indicating that packet reservation multiple access (PRMA) allows a variety of information sources to share the same wireless access channel. Some of the sources, such as speech terminals, are classified as periodic and others, such as signaling, are classified as random. Packets from all sources contend for access to channel time slots. When a periodic information terminal succeeds in gaining access, it reserves subsequent time slots for uncontested transmission. Both computer simulations and a listening test reveal that PRMA achieves a promising combination of voice quality and bandwidth efficiency. >", "The dynamic behavior of the R-ALOHA packet broadcast system with multipacket messages is analyzed in this paper. It is assumed that each user handles one message at a time and the number of packets in a message is geometrically distributed. A Markovian model of the system is first formulated which explicitly contains the influence of the propagation delay of the broadcast channel. An approximate technique called equilibrium point analysis (EPA) is utilized to analyze the multidimensional Markov chain. The system stability behavior and the throughput-average message delay performance are demonstrated by the EPA. Numerical results from both analysis and simulation are given to assess the accuracy of the analytic results. Applying the analytic results to the slotted ALOHA with single packet messages, we prove mathematically that a method by Kleinrock and Lam for taking into account the influence of the propagation delay is an excellent approximation.", "This paper deals with a random reservation TDMA protocol able to support constant bit rate services as well as variable bit rate services. In particular, voice communications and data transmissions are considered. Voice terminals have a higher priority assigned than data terminals in accessing the shared channel. A suitable analytical approach is proposed in order to evaluate the data and voice subsystem performance. Comparisons to the well known PRMA scheme are also given in order to highlight the superior performance of the proposed approach in terms of maximum data load and overall throughput." ] }
0806.3542
1484545291
This paper proposes and analyzes a distributed MAC protocol that achieves zero collision with no control message exchange nor synchronization. ZC (ZeroCollision) is neither reservation-based nor dynamic TDMA; the protocol supports variable-length packets and does not lose efficiency when some of the stations do not transmit. At the same time, ZC is not a CSMA; in its steady state, it is completely collision-free. The stations transmit repeatedly in a round-robin order once the convergence state is reached. If some stations skip their turn, their transmissions are replaced by idle @math -second mini-slots that enable the other stations to keep track of their order. Because of its short medium access delay and its efficiency, the protocol supports both real-time and elastic applications. The protocol allows for nodes leaving and joining the network; it can allocate more throughput to specific nodes (such as an access point). The protocol is robust against carrier sensing errors or clock drift. While collision avoidance is guaranteed in a single collision domain, it is not the case in a multiple collision one. However, experiments show ZC supports a comparable amount of goodput to CSMA in a multiple collision domain environment. The paper presents an analysis and extensive simulations of the protocol, confirming that ZC outperforms both CSMA and TDMA at high and low load.
Some protocols, such as those in @cite_22 and @cite_13 , are classified as dynamic TDMA. Time is divided into a sequence of frames, and each frame consists of at least two constant-length phases: a control phase used for competition for slot allocation via random access, auction @cite_6 @cite_0 @cite_29 , or distributed election @cite_10 , and the remaining phase(s) devoted to data @cite_13 . Stations are synchronized at the start of each phase, except in some cases such as @cite_19 and @cite_5 . TRAMA @cite_10 , MMAC @cite_19 and their derivatives belong to this class in the context of wireless sensor networks.
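The sketch below illustrates the flavour of such a distributed election: every node computes the same pseudo-random priority for each (node, slot) pair, and the highest-priority contender owns the slot, so all nodes reach the same decision without exchanging messages. This is a simplified stand-in; the actual election in TRAMA also takes traffic information into account, and the hash choice here is an assumption.

```python
# Simplified distributed slot election: identical priority computation at
# every node yields a consistent schedule with no message exchange.
import hashlib

def priority(node_id, slot):
    digest = hashlib.sha256(f"{node_id}:{slot}".encode()).hexdigest()
    return int(digest, 16)

def slot_owner(contenders, slot):
    return max(contenders, key=lambda n: priority(n, slot))

# Every node running this code computes the same owners for slots 0..4.
contenders = ["node-a", "node-b", "node-c"]
print([slot_owner(contenders, s) for s in range(5)])
```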
{ "cite_N": [ "@cite_22", "@cite_10", "@cite_29", "@cite_6", "@cite_0", "@cite_19", "@cite_5", "@cite_13" ], "mid": [ "1995791248", "1986917282", "2020486957", "", "2153460996", "2118487614", "2073536040", "" ], "abstract": [ "The Wideband (packet satellite) network is an experimental 3 Mbit s communications system developed under sponsorship of the Defense Advanced Research Projects Agency and the Defense Communications Agency. This system is being used to evaluate the use of packet transmission for efficient voice communication, voice conferencing, and integration of voice and data over a satellite channel. Each station in the Wideband network consists of an earth terminal (dedicated 5 m antenna plus associated IF RF equipment), a burst-modem and codec unit, and a station controller. Station controllers provide interfaces to host computers (including packet speech sources) and manage the allocation of the satellite channel on a TDMA demand-assigned basis. TDMA demand-assignment is implemented using a reservation-based packet-oriented protocol capableof handling traffic at multiple priority levels. The channel protocol provides a reservation-per-message mode of service (datagrams) to support transmission from bursty traffic sources and a reservation-per-call mode of service (streams) to support traffic with more regular arrival statisticS (e.g., vioce). A distributed scheduler running in every station controller eliminates the need for a central control station and minimizes network transit delay for datagram transmission as well as stream creation, modification, and deletion. In this paper we describe the protocols and mechanisms upon which the Wideband packet satellite network is based.", "The traffic-adaptive medium access protocol (TRAMA) is introduced for energy-efficient collision-free channel access in wireless sensor networks. TRAMA reduces energy consumption by ensuring that unicast, multicast, and broadcast transmissions have no collisions, and by allowing nodes to switch to a low-power, idle state whenever they are not transmitting or receiving. TRAMA assumes that time is slotted and uses a distributed election scheme based on information about the traffic at each node to determine which node can transmit at a particular time slot. TRAMA avoids the assignment of time slots to nodes with no traffic to send, and also allows nodes to determine when they can become idle and not listen to the channel using traffic information. TRAMA is shown to be fair and correct, in that no idle node is an intended receiver and no receiver suffers collisions. The performance of TRAMA is evaluated through extensive simulations using both synthetic- as well as sensor-network scenarios. The results indicate that TRAMA outperforms contention-based protocols (e.g., CSMA, 802.11 and S-MAC) as well as scheduling-based protocols (e.g., NAMA) with significant energy savings.", "In this paper, we present a new collision-free MAC protocol-carrier sense media access with ID countdown (CSMA IC) for ad hoc wireless networks that can achieve 100 collision-free performance by solving the \"hidden terminal\" problem and the concurrent sending problem. Compared to CSMA IC of IEEE 802.11, it also improves the network's performance in decreasing the network's throughput significantly. 
Furthermore, it can enable different packets with different priority to access the media and thus gain QoS.", "", "Although collision free TDMA schemes have been proposed and used for more than two decades, an important ingredient of these schemes, the initialization of stations (that is, assigning ID numbers 1,2,...,n) was not investigated until recently. Binary and n-ary partitioning algorithms were recently proposed for the case of stations with collision detection capability. The main contribution of this paper is a new randomized hybrid initialization protocol which combines the two partitioning algorithms into a more efficient one. The new scheme optimizes the binary partition protocol for small values of n (e.g. n=2, 3, 4). The hybrid scheme then applies n-ary partition protocol on the whole set, followed by binary partition on the stations that caused collision. We proved analytically that the expected number of time slots in the hybrid algorithm with known number of users is <2.20? n. Performance of these algorithms was also evaluated experimentally by comparing it with existing algorithms, and an improvement from e? n to approximately 2.15? n was obtained.", "Mobility in wireless sensor networks poses unique challenges to the medium access control (MAC) protocol design. Previous MAC protocols for sensor networks assume static sensor nodes and focus on energy-efficiency. In this paper, we present a mobility-adaptive, collision-free medium access control protocol (MMAC) for mobile sensor networks. MMAC caters for both weak mobility (e.g., topology changes, node joins, and node failures) and strong mobility (e.g., concurrent node joins and failures, and physical mobility of nodes). MMAC is a scheduling-based protocol and thus it guarantees collision avoidance. MMAC allows nodes the transmission rights at particular time-slots based on the traffic information and mobility pattern of the nodes. Simulation results indicate that the performance of MMAC is equivalent to that of TRAMA in static sensor network environments. In sensor networks with mobile nodes or high network dynamics, MMAC outperforms existing MAC protocols, like TRAM A and S-MAC, in terms of energy-efficiency, delay, and packet delivery.", "A centralized, integrated voice data radio network for fading multipath indoor radio channels is proposed and analyzed. The packets of voice and data are integrated through a movable boundary method. The uplink channel access uses a framed-polling protocol whereas the downlink uses a time-division multiple-access (TDMA) scheme. This system dynamically switches between two transmission rates and uses multiple antennas to maximize the throughput in the fading multipath indoor environment. Throughput and delay characteristics of the system are analyzed using four different techniques. The results are compared with those of Monte Carlo computer simulations. A simple relationship between the number of voice terminals and the throughput of the data traffic are derived for an upper bound of 10-ms delay for the data packets. >", "" ] }
0806.3542
1484545291
This paper proposes and analyzes a distributed MAC protocol that achieves zero collision with no control message exchange nor synchronization. ZC (ZeroCollision) is neither reservation-based nor dynamic TDMA; the protocol supports variable-length packets and does not lose efficiency when some of the stations do not transmit. At the same time, ZC is not a CSMA; in its steady state, it is completely collision-free. The stations transmit repeatedly in a round-robin order once the convergence state is reached. If some stations skip their turn, their transmissions are replaced by idle @math -second mini-slots that enable the other stations to keep track of their order. Because of its short medium access delay and its efficiency, the protocol supports both real-time and elastic applications. The protocol allows for nodes leaving and joining the network; it can allocate more throughput to specific nodes (such as an access point). The protocol is robust against carrier sensing errors or clock drift. While collision avoidance is guaranteed in a single collision domain, it is not the case in a multiple collision one. However, experiments show ZC supports a comparable amount of goodput to CSMA in a multiple collision domain environment. The paper presents an analysis and extensive simulations of the protocol, confirming that ZC outperforms both CSMA and TDMA at high and low load.
There are protocols that utilize out-of-band signalling, such as BTMA @cite_11 and DBTMA @cite_20 , or multichannel control, such as SRMA @cite_25 and BROADEN @cite_18 . While those schemes enable distributed collision-free medium access, they demand multiple radios per device, which incurs additional complexity and cost. Therefore, in our research we focus only on the single-radio case.
{ "cite_N": [ "@cite_18", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "1741095448", "2110504193", "2140103665", "2142076918" ], "abstract": [ "In this paper, we investigate a new class of collision-prevention MAC protocols, called carrier sense multiple access with collision prevention (CSMA CP), for wireless ad hoc networks. We proposed a collision-free MAC protocol called BROADEN based on CSMA CP. To the best of our knowledge, BROADEN is the first distributed MAC protocol that can achieve 100 collision-free transmissions in both the control channel and data channel in multihop ad hoc networks. Furthermore, BROADEN can solve the hidden, exposed, and moving terminal problems at the same time. It also improves the performance of previous MAC protocols in terms of average packet delay and network throughput. Moreover, our protocol effectively supports quality-of-service (QoS) provisioning based on prioritization and reservation.", "Here we continue the analytic study of packet switching in radio channels which we reported upon m our two previous papers [1], [2] Again we consider a population of terminals communicating with a central station over a packet-switched radio channel. The allocation of bandwidth among the contending terminals can be fixed [e.g., time-division multiple access (TDMA) or frequency-division multiple access (FDMA)], random [e.g., ALOHA or carrier sense multiple access (CSMA)] or centrally controlled (e.g., polling or reservation). In this paper we show that with a large population of bursty users, (as expected) random access is superior to both fixed assignment and polling. We also introduce and analyze a dynamic reservation technique which we call split-channel reservation multiple access (SRMA) which is interesting in that it is both simple and efficient over a large range of system parameters.", "In ad hoc networks, the hidden- and the exposed-terminal problems can severely reduce the network capacity on the MAC layer. To address these problems, the ready-to-send and clear-to-send (RTS CTS) dialogue has been proposed in the literature. However, MAC schemes using only the RTS CTS dialogue cannot completely solve the hidden and the exposed terminal problems, as pure \"packet sensing\" MAC schemes are not safe even in fully connected networks. We propose a new MAC protocol, termed the dual busy tone multiple access (DBTMA) scheme. The operation of the DBTMA protocol is based on the RTS packet and two narrow-bandwidth, out-of-band busy tones. With the use of the RTS packet and the receive busy tone, which is set up by the receiver, our scheme completely solves the hidden- and the exposed-terminal problems. The busy tone, which is set up by the transmitter, provides protection for the RTS packets, increasing the probability of successful RTS reception and, consequently, increasing the throughput. This paper outlines the operation rules of the DBTMA scheme and analyzes its performance. Simulation results are also provided to support the analytical results. It is concluded that the DBTMA protocol is superior to other schemes that rely on the RTS CTS dialogue on a single channel or to those that rely on a single busy tone. As a point of reference, the DBTMA scheme out-performs FAMA-NCS by 20-40 in our simulations using the network topologies borrowed from the FAMA-NCS paper. 
In an ad hoc network with a large coverage area, DBTMA achieves performance gain of 140 over FAMA-NCS and performance gain of 20 over RI-BTMA.", "We consider a population of terminals communicating with a central station over a packet-switched multiple-access radio channel. The performance of carrier sense multiple access (CSMA) [1] used as a method for multiplexing these terminals is highly dependent on the ability of each terminal to sense the carrier of any other transmission on the channel. Many situations exist in which some terminals are \"hidden\" from each other (either because they are out-of-sight or out-of-range). In this paper we show that the existence of hidden terminals significantly degrades the performance of CSMA. Furthermore, we introduce and analyze the busy-tone multiple-access (BTMA) mode as a natural extension of CSMA to eliminate the hidden-terminal problem. Numerical results giving the bandwidth utilization and packet delays are shown, illustrating that BTMA with hidden terminals performs almost as well as CSMA without hidden terminals." ] }
0806.3542
1484545291
This paper proposes and analyzes a distributed MAC protocol that achieves zero collision with no control message exchange nor synchronization. ZC (ZeroCollision) is neither reservation-based nor dynamic TDMA; the protocol supports variable-length packets and does not lose efficiency when some of the stations do not transmit. At the same time, ZC is not a CSMA; in its steady state, it is completely collision-free. The stations transmit repeatedly in a round-robin order once the convergence state is reached. If some stations skip their turn, their transmissions are replaced by idle @math -second mini-slots that enable the other stations to keep track of their order. Because of its short medium access delay and its efficiency, the protocol supports both real-time and elastic applications. The protocol allows for nodes leaving and joining the network; it can allocate more throughput to specific nodes (such as an access point). The protocol is robust against carrier sensing errors or clock drift. While collision avoidance is guaranteed in a single collision domain, it is not the case in a multiple collision one. However, experiments show ZC supports a comparable amount of goodput to CSMA in a multiple collision domain environment. The paper presents an analysis and extensive simulations of the protocol, confirming that ZC outperforms both CSMA and TDMA at high and low load.
The closest relative of ZC is MSAP @cite_12 . In MSAP, the sequence of mini-slots plays a role analogous to a silent polling sequence, and the station holding the token does not release the channel until it empties its buffer. Moreover, in MSAP the access sequence is assumed to be predefined and shared by all stations ahead of time, and no distributed self-stabilizing process is proposed, so MSAP cannot recover from any dynamic event or carrier sensing error. In contrast, ZC is a simple but robust self-stabilizing protocol that solves the time assignment problem effectively. BRAM @cite_14 and SUPBRAM @cite_9 are extensions of MSAP with varying lengths of idle mini-slots, but these protocols require packet decoding and global synchronization at every node.
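As a toy illustration of this mini-slot round-robin discipline, the sketch below walks a fixed station order and charges a full packet time for a transmitting station but only a short idle mini-slot for a silent one. The durations and traffic model are made-up values; this is not a faithful model of MSAP or of ZC's convergence process.

```python
# Mini-slot round-robin: stations take turns in a fixed order; a station
# with nothing to send consumes only a short idle mini-slot, which lets
# the others keep track of their position in the sequence.

def schedule(order, has_packet, rounds=2, packet_time=10, mini_slot=1):
    """Returns (events, elapsed_time) for `rounds` passes over the order."""
    events, t = [], 0
    for _ in range(rounds):
        for station in order:
            if has_packet(station):
                events.append((t, station, "transmit"))
                t += packet_time
            else:
                events.append((t, station, "idle mini-slot"))
                t += mini_slot
    return events, t

events, elapsed = schedule(["A", "B", "C"], has_packet=lambda s: s != "B")
for e in events:
    print(e)
print("elapsed:", elapsed)
```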
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_12" ], "mid": [ "2103482874", "2052364572", "2103221758" ], "abstract": [ "We describe an adaptive multiaccess channel protocol for use in radio networks with an arbitrary distribution of stationary hidden nodes, which provides the nodes with controlled, collision-free access to the channel. The protocol can be considered to belong to the BRAM [5] protocol family, but differs in significant ways from BRAM. In this paper we describe the tenets of the protocol, then develop the protocol, and finally develop analytic expressions for its expected throughput and delay performance. Given these delay-throughput expressions, we show how protocol \"delay\" optimization can be achieved by dynamic adjustment of a protocol parameter as the network traffic load changes.", "In this paper, we first present the broadcast recognizing access method (BRAM), an access protocol suitable for regulating internode communication in either a radio or (coaxial or fiber) cable based communication system. The method avoids collisions, imposes negligible computational requirements on the nodes attempting to transmit, and is fair in the sense that no node will be indefinitely prevented from transmitting. Next we introduce parametric BRAM which attempts to balance the length of inserted channel idle periods, resulting from scheduling effects, against the probability of allowed message collisions. We show that parametric BRAM can be used to realize a method which balances inserted channel idle time against the probability of message collision to yield enhanced performance. For high message loads, parametric BRAM converges to BRAM, while for low and medium loadings it yields throughputs in excess of BRAM, and other methods. Both BRAM and parametric BRAM are discussed under the assumption of homogeneous message arrival rates at the nodes. We conclude by showing how the parametric BRAM can be applied when the nodes operate with heterogeneous or mixed message arrival rates.", "We study new access schemes for a population of geographically distributed data users who communicate with each other and or with a central station over a multiple-access broadcast ground radio packet-switching channel. We introduce and analyze alternating priorities (AP), round robin (RR), and random order (RO) as new conflict-free methods for multiplexing buffered users without control from a central station. These methods are effective when the number of users is not too large; as the number grows, a large overhead leads to a performance degradation. To reduce this degradation, we consider a natural extension of AP, called minislotted alternating priorities (MSAP) which reduces the overhead and is superior to fixed assignment, polling, and known random access schemes under heavy traffic conditions. At light input loads, only random access schemes outperform MSAP when we have a large population of users. In addition, and of major importance, is the fact that MSAP does not require control from a central station." ] }
0805.3747
2949310701
Many social Web sites allow users to publish content and annotate with descriptive metadata. In addition to flat tags, some social Web sites have recently begun to allow users to organize their content and metadata hierarchically. The social photosharing site Flickr, for example, allows users to group related photos in sets, and related sets in collections. The social bookmarking site Del.icio.us similarly lets users group related tags into bundles. Although the sites themselves don't impose any constraints on how these hierarchies are used, individuals generally use them to capture relationships between concepts, most commonly the broader narrower relations. Collective annotation of content with hierarchical relations may lead to an emergent classification system, called a folksonomy. While some researchers have explored using tags as evidence for learning folksonomies, we believe that the hierarchical relations described above offer a high-quality source of evidence for this task. We propose a simple approach to aggregate shallow hierarchies created by many distinct Flickr users into a common folksonomy. Our approach uses statistics to determine if a particular relation should be retained or discarded. The relations are then woven together into larger hierarchies. Although we have not carried out a detailed quantitative evaluation of the approach, it looks very promising since it generates very reasonable, non-trivial hierarchies.
Many researchers have studied the problem of extracting ontological relations from text, e.g., @cite_15 @cite_5 @cite_1 . These works exploit linguistic patterns to infer whether two keywords are related by a certain relationship. For instance, they use ``such as'' (``vehicles, such as cars'') to learn hyponym relations. Cimiano @cite_8 also applies linguistic patterns to extract object properties and then uses Formal Concept Analysis (FCA) to infer conceptual hierarchies. In FCA, a given object consists of a set of attributes, and some attributes are common to a subset of objects. A concept `A' subsumes a concept `B' if all objects in `B' (with some common attributes) are also in `A'. However, these approaches are not applicable to the metadata on the social Web, such as tags, bundles, and photo sets, which are ungrammatical and unstructured.
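A minimal example of such a lexico-syntactic pattern is sketched below: a regular expression that treats ``X, such as Y and Z'' as evidence that Y and Z are narrower terms of X. It is a simplified stand-in for the cited systems, not their implementation, and the pattern itself is an illustrative assumption.

```python
# Toy Hearst-style pattern: read "X such as Y, Z" as hyponym evidence.
import re

PATTERN = re.compile(r"(\w+)\s*,?\s+such as\s+([\w ,]+?)(?:[.;]|$)")

def hyponyms(text):
    pairs = []
    for broader, narrower_list in PATTERN.findall(text):
        for narrower in re.split(r",| and ", narrower_list):
            narrower = narrower.strip()
            if narrower:
                pairs.append((broader.strip(), narrower))
    return pairs

print(hyponyms("They study vehicles, such as cars and trucks."))
# expected: [('vehicles', 'cars'), ('vehicles', 'trucks')]
```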
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_1", "@cite_8" ], "mid": [ "2013109830", "2068737686", "2145328028", "2123982464" ], "abstract": [ "The recognition of names and their associated categories within unstructured text traditionally relies on semantic lexicons and gazetteers. The amount of effort required to assemble large lexicons confines the recognition to either a limited domain (e.g., medical imaging), or a small set of pre-defined, broader categories of interest (e.g., persons, countries, organizations, products). This constitutes a serious limitation in an information seeking context. In this case, the categories of potential interest to users are more diverse (universities, agencies, retailers, celebrities), often refined (e.g., SLR digital cameras, programming languages, multinational oil companies), and usually overlapping (e.g., the same entity may be concurrently a brand name, a technology company, and an industry leader). We present a lightly supervised method for acquiring named entities in arbitrary categories. The method applies lightweight lexico-syntactic extraction patterns to the unstructured text of Web documents. The method is a departure from traditional approaches to named entity recognition in that: 1) it does not require any start-up seed names or training; 2) it does not encode any domain knowledge in its extraction patterns; 3) it is only lightly supervised, and data-driven; 4) it does not impose any a-priori restriction on the categories of extracted names. We illustrate applications of the method in Web search, and describe experiments on 500 million Web documents and news articles.", "We describe a method for the automatic acquisition of the hyponymy lexical relation from unrestricted text. Two goals motivate the approach: (i) avoidance of the need for pre-encoded knowledge and (ii) applicability across a wide range of text. We identify a set of lexico-syntactic patterns that are easily recognizable, that occur frequently and across text genre boundaries, and that indisputably indicate the lexical relation of interest. We describe a method for discovering these patterns and suggest that other lexical relations will also be acquirable in this way. A subset of the acquisition algorithm is implemented and the results are used to augment and critique the structure of a large hand-built thesaurus. Extensions and applications to areas such as information retrieval are suggested.", "We present a novel approach to weakly supervised semantic class learning from the web, using a single powerful hyponym pattern combined with graph structures, which capture two properties associated with pattern-based extractions: popularity and productivity. Intuitively, a candidate is popular if it was discovered many times by other instances in the hyponym pattern. A candidate is productive if it frequently leads to the discovery of other instances. Together, these two measures capture not only frequency of occurrence, but also cross-checking that the candidate occurs both near the class name and near other class members. We developed two algorithms that begin with just a class name and one seed instance and then automatically generate a ranked list of new class instances. We conducted experiments on four semantic classes and consistently achieved high accuracies.", "We present a novel approach to the automatic acquisition of taxonomies or concept hierarchies from a text corpus. 
The approach is based on Formal Concept Analysis (FCA), a method mainly used for the analysis of data, i.e. for investigating and processing explicitly given information. We follow Harris' distributional hypothesis and model the context of a certain term as a vector representing syntactic dependencies which are automatically acquired from the text corpus with a linguistic parser. On the basis of this context information, FCA produces a lattice that we convert into a special kind of partial order constituting a concept hierarchy. The approach is evaluated by comparing the resulting concept hierarchies with hand-crafted taxonomies for two domains: tourism and finance. We also directly compare our approach with hierarchical agglomerative clustering as well as with Bi-Section-KMeans as an instance of a divisive clustering algorithm. Furthermore, we investigate the impact of using different measures weighting the contribution of each attribute as well as of applying a particular smoothing technique to cope with data sparseness." ] }
0805.3747
2949310701
Many social Web sites allow users to publish content and annotate with descriptive metadata. In addition to flat tags, some social Web sites have recently begun to allow users to organize their content and metadata hierarchically. The social photosharing site Flickr, for example, allows users to group related photos in sets, and related sets in collections. The social bookmarking site Del.icio.us similarly lets users group related tags into bundles. Although the sites themselves don't impose any constraints on how these hierarchies are used, individuals generally use them to capture relationships between concepts, most commonly the broader narrower relations. Collective annotation of content with hierarchical relations may lead to an emergent classification system, called a folksonomy. While some researchers have explored using tags as evidence for learning folksonomies, we believe that the hierarchical relations described above offer a high-quality source of evidence for this task. We propose a simple approach to aggregate shallow hierarchies created by many distinct Flickr users into a common folksonomy. Our approach uses statistics to determine if a particular relation should be retained or discarded. The relations are then woven together into larger hierarchies. Although we have not carried out a detailed quantitative evaluation of the approach, it looks very promising since it generates very reasonable, non-trivial hierarchies.
Recently, several papers have proposed different approaches to constructing conceptual hierarchies from tags collated from social Web sites. Mika @cite_12 uses a graph-based approach to construct a network of related tags, projected from either a user-tag or an object-tag association graph. Although there is no evaluation of the induced broader narrower relations, the work suggests inferring them by using betweenness centrality and set theory. Other works apply clustering techniques to keywords expressed in tags and use their co-occurrence statistics to produce conceptual hierarchies @cite_9 @cite_3 . In a variation of the clustering approach, Heymann @cite_0 uses graph centrality in the similarity graph of tags: the tag with the highest centrality is considered more abstract than one with lower centrality, so it should be merged into the hierarchy first, to guarantee that the more general node is placed closer to the root. Schmitz @cite_14 has applied a statistical subsumption model @cite_2 to induce hierarchical relations among tags.
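The statistical subsumption rule can be stated compactly: tag x is taken to subsume tag y when x appears on most of the items tagged with y, but not vice versa. The sketch below implements this rule over sets of tags; the threshold value and data layout are assumptions for illustration, not the exact criterion used in the cited work.

```python
# Hedged sketch of statistical subsumption over tag co-occurrence.
from collections import defaultdict

def subsumptions(doc_tags, threshold=0.8):
    """doc_tags: list of tag sets, one per document/photo."""
    count = defaultdict(int)        # tag -> number of items carrying it
    pair = defaultdict(int)         # (x, y) -> number of items carrying both
    for tags in doc_tags:
        for t in tags:
            count[t] += 1
        for x in tags:
            for y in tags:
                if x != y:
                    pair[(x, y)] += 1
    relations = []
    for (x, y), both in pair.items():
        p_x_given_y = both / count[y]
        p_y_given_x = both / count[x]
        if p_x_given_y >= threshold and p_y_given_x < threshold:
            relations.append((x, y))   # x is broader than y
    return relations

docs = [{"animal", "dog"}, {"animal", "dog"}, {"animal", "cat"}, {"animal"}]
print(subsumptions(docs))   # expected to include ('animal', 'dog') and ('animal', 'cat')
```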
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_3", "@cite_0", "@cite_2", "@cite_12" ], "mid": [ "", "2120721759", "1532323755", "2161173831", "2034953016", "" ], "abstract": [ "", "Tags have recently become popular as a means of annotating and organizing Web pages and blog entries. Advocates of tagging argue that the use of tags produces a 'folksonomy', a system in which the meaning of a tag is determined by its use among the community as a whole. We analyze the effectiveness of tags for classifying blog entries by gathering the top 350 tags from Technorati and measuring the similarity of all articles that share a tag. We find that tags are useful for grouping articles into broad categories, but less effective in indicating the particular content of an article. We then show that automatically extracting words deemed to be highly relevant can produce a more focused categorization of articles. We also show that clustering algorithms can be used to reconstruct a topical hierarchy among tags, and suggest that these approaches may be used to address some of the weaknesses in current tagging systems.", "This paper deals with the problem of exploring hierarchical semantics from social annotations. Recently, social annotation services have become more and more popular in Semantic Web. It allows users to arbitrarily annotate web resources, thus, largely lowers the barrier to cooperation. Furthermore, through providing abundant meta-data resources, social annotation might become a key to the development of Semantic Web. However, on the other hand, social annotation has its own apparent limitations, for instance, 1) ambiguity and synonym phenomena and 2) lack of hierarchical information. In this paper, we propose an unsupervised model to automatically derive hierarchical semantics from social annotations. Using a social bookmark service Del.icio.us as example, we demonstrate that the derived hierarchical semantics has the ability to compensate those shortcomings. We further apply our model on another data set from Flickr to testify our model's applicability on different environments. The experimental results demonstrate our model's efficiency.", "Collaborative tagging systems---systems where many casual users annotate objects with free-form strings (tags) of their choosing---have recently emerged as a powerful way to label and organize large collections of data. During our recent investigation into these types of systems, we discovered a simple but remarkably effective algorithm for converting a large corpus of tags annotating objects in a tagging system into a navigable hierarchical taxonomy of tags. We first discuss the algorithm and then present a preliminary model to explain why it is so effective in these types of systems.", "Abstract : This paper presents a means of automatically deriving a hierarchical organization of concepts from a set of documents without use of training data or standard clustering techniques. Instead, salient words and phrases extracted from the documents are organized hierarchically using a type of co-occurrence known as subsumption. The resulting structure is displayed as a series of hierarchical menus. When generated from a set of retrieved documents, a user browsing the menus is provided with a detailed overview of their content in a manner distinct from existing overview and summarization techniques. 
The methods used to build the structure are simple, but appear to be effective: a smallscale user study reveals that the generated hierarchy possesses properties expected of such a structure in that general terms are placed at the top levels leading to related and more specific terms below. The formation and presentation of the hierarchy is described along with the user study and some other informal evaluations. The organization of a set of documents into a concept hierarchy derived automatically from the set itself is undoubtedly one goal of information retrieval. Were this goal to be achieved, the documents would be organized into a form somewhat like existing manually constructed subject hierarchies, such as the Library of Congress categories, or the Dewey Decimal system. The only difference being that the categories would be customized to the set of documents itself. For example, from a collection of media related articles, the category \"Entertainment\" might appear near the top level; below it, (amongst others) one might find the category \"Movies\", a type of entertainment; and below that, there could be the category \"Actors & Actresses\", an aspect of movies. As can be seen, the arrangement of the categories provides an overview of the topic structure of those articles.", "" ] }
0805.3747
2949310701
Many social Web sites allow users to publish content and annotate with descriptive metadata. In addition to flat tags, some social Web sites have recently begun to allow users to organize their content and metadata hierarchically. The social photosharing site Flickr, for example, allows users to group related photos in sets, and related sets in collections. The social bookmarking site Del.icio.us similarly lets users group related tags into bundles. Although the sites themselves don't impose any constraints on how these hierarchies are used, individuals generally use them to capture relationships between concepts, most commonly broader/narrower relations. Collective annotation of content with hierarchical relations may lead to an emergent classification system, called a folksonomy. While some researchers have explored using tags as evidence for learning folksonomies, we believe that the hierarchical relations described above offer a high-quality source of evidence for this task. We propose a simple approach to aggregate shallow hierarchies created by many distinct Flickr users into a common folksonomy. Our approach uses statistics to determine if a particular relation should be retained or discarded. The relations are then woven together into larger hierarchies. Although we have not carried out a detailed quantitative evaluation of the approach, it looks very promising since it generates very reasonable, non-trivial hierarchies.
There is another line of research that focuses on exploiting partial hierarchies contributed by users. The GiveALink project @cite_6 collects bookmarks donated by users. Each bookmark is organized in a tree structure, as folders and subfolders, by an individual user. Based on these tree structures, similarities between URLs are computed and used for URL recommendation and ranking. Although this project does not concentrate on conceptual hierarchy construction, it provides a good motivation to exploit explicit partial structures like folder and subfolder relations. Our approach is in the same spirit as GiveALink: we exploit collection and set relations contributed by users on a social Web site to construct conceptual hierarchies. We hypothesize that the generality-popularity problem of keywords is less severe in the collection-set relation space than in the tag space. Although people may use the keyword "Washington" far more often than "United States" to name their collections and sets, few people would put their "United States" album under a "Washington" super-album.
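To make the aggregation step above concrete, here is a minimal Python sketch (not the authors' code) of statistics-based merging of per-user parent/child relations into a common hierarchy. The `min_support` threshold, the tie-breaking rule against reversed relations, and the data layout are all assumptions made purely for illustration.

```python
from collections import defaultdict

def build_folksonomy(user_relations, min_support=2):
    """user_relations: iterable of (user_id, parent_term, child_term) triples,
    one per broader/narrower relation asserted by an individual user."""
    support = defaultdict(set)                       # (parent, child) -> users asserting it
    for user, parent, child in user_relations:
        support[(parent.lower(), child.lower())].add(user)

    hierarchy = defaultdict(set)                     # parent -> retained children
    for (parent, child), users in support.items():
        reverse = len(support.get((child, parent), set()))
        # Keep a relation only if enough users assert it and it is asserted
        # more often than its reverse.
        if len(users) >= min_support and len(users) > reverse:
            hierarchy[parent].add(child)
    return dict(hierarchy)

relations = [("u1", "United States", "Washington"),
             ("u2", "United States", "Washington"),
             ("u3", "United States", "Washington"),
             ("u4", "Washington", "United States")]  # the rarer, reversed assertion
print(build_folksonomy(relations))
# {'united states': {'washington'}}   (the reversed relation is discarded)
```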
{ "cite_N": [ "@cite_6" ], "mid": [ "1595400570" ], "abstract": [ "GiveALink.org is a social bookmarking site where users may donate and view their personal bookmark files online securely. The bookmarks are analyzed to build a new generation of intelligent information retrieval techniques to recommend, search, and personalize the Web. GiveALink does not use tags, content, or links in the submitted Web pages. Instead we present a semantic similarity measure for URLs that takes advantage both of the hierarchical structure in the bookmark files of individual users, and of collaborative filtering across users. In addition, we build a recommendation and search engine from ranking algorithms based on popularity and novelty measures extracted from the similarity-induced network. Search results can be personalized using the bookmarks submitted by a user. We evaluate a subset of the proposed ranking measures by conducting a study with human subjects." ] }
0805.3747
2949310701
Many social Web sites allow users to publish content and annotate with descriptive metadata. In addition to flat tags, some social Web sites have recently begun to allow users to organize their content and metadata hierarchically. The social photosharing site Flickr, for example, allows users to group related photos in sets, and related sets in collections. The social bookmarking site Del.icio.us similarly lets users group related tags into bundles. Although the sites themselves don't impose any constraints on how these hierarchies are used, individuals generally use them to capture relationships between concepts, most commonly broader/narrower relations. Collective annotation of content with hierarchical relations may lead to an emergent classification system, called a folksonomy. While some researchers have explored using tags as evidence for learning folksonomies, we believe that the hierarchical relations described above offer a high-quality source of evidence for this task. We propose a simple approach to aggregate shallow hierarchies created by many distinct Flickr users into a common folksonomy. Our approach uses statistics to determine if a particular relation should be retained or discarded. The relations are then woven together into larger hierarchies. Although we have not carried out a detailed quantitative evaluation of the approach, it looks very promising since it generates very reasonable, non-trivial hierarchies.
Our approach is similar in spirit to ontology alignment @cite_13 . However, unlike those works, which merge a small number of deep and detailed hierarchies, we merge a large number of noisy, shallow hierarchies.
{ "cite_N": [ "@cite_13" ], "mid": [ "2125149214" ], "abstract": [ "There is a great deal of research on ontology integration which makes use of rich logical constraints to reason about the structural and logical alignment of ontologies. There is also considerable work on matching data instances from heterogeneous schema or ontologies. However, little work exploits the fact that ontologies include both data and structure. We aim to close this gap by presenting a new algorithm (ILIADS) that tightly integrates both data matching and logical reasoning to achieve better matching of ontologies. We evaluate our algorithm on a set of 30 pairs of OWL Lite ontologies with the schema and data matchings found by human reviewers. We compare against two systems - the ontology matching tool FCA-merge [28] and the schema matching tool COMA++ [1]. ILIADS shows an average improvement of 25 in quality over FCA-merge and a 11 improvement in recall over COMA++." ] }
0805.4680
2127238153
The Telex system is designed for sharing mutable data in a distributed environment, particularly for collaborative applications. Users operate on their local, persistent replica of shared documents; they can work disconnected and suffer no network latency. The Telex approach to detect and correct conflicts is application independent, based on an action-constraint graph (ACG) that summarises the concurrency semantics of applications. The ACG is stored efficiently in a multilog structure that eliminates contention and is optimised for locality. Telex supports multiple applications and multi-document updates. The Telex system clearly separates system logic (which includes replication, views, undo, security, consistency, conflicts, and commitment) from application logic. An example application is a shared calendar for managing multi-user meetings; the system detects meeting conflicts and resolves them consistently.
State-machine replication @cite_2 is based on a total order of operations. This ensures consistency and correctness, but requires consensus at each operation, in the critical path of the application. In contrast, Telex's optimistic approach performs consensus in batches, in the background.
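As a point of reference for the total-order mechanism mentioned above, the following sketch implements the classical logical-clock rule from the cited work: every event is stamped with a counter, and ties are broken by process id to obtain a total order. The class layout and the example run are illustrative only.

```python
# Illustrative sketch of Lamport logical clocks, the mechanism underlying
# totally ordered state-machine replication.
class Process:
    def __init__(self, pid):
        self.pid = pid
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return (self.clock, self.pid)            # timestamp of the event

    def send(self):
        self.clock += 1
        return (self.clock, self.pid)            # timestamp carried by the message

    def receive(self, msg_ts):
        self.clock = max(self.clock, msg_ts[0]) + 1
        return (self.clock, self.pid)

p, q = Process(1), Process(2)
ts1 = p.send()
ts2 = q.receive(ts1)
ts3 = q.local_event()
# Sorting (counter, pid) pairs yields the total order used to replay operations.
print(sorted([ts1, ts2, ts3]))
```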
{ "cite_N": [ "@cite_2" ], "mid": [ "1973501242" ], "abstract": [ "The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become." ] }
0805.4680
2127238153
The Telex system is designed for sharing mutable data in a distributed environment, particularly for collaborative applications. Users operate on their local, persistent replica of shared documents; they can work disconnected and suffer no network latency. The Telex approach to detect and correct conflicts is application independent, based on an action-constraint graph (ACG) that summarises the concurrency semantics of applications. The ACG is stored efficiently in a multilog structure that eliminates contention and is optimised for locality. Telex supports multiple applications and multi-document updates. The Telex system clearly separates system logic (which includes replication, views, undo, security, consistency, conflicts, and commitment) from application logic. An example application is a shared calendar for managing multi-user meetings; the system detects meeting conflicts and resolves them consistently.
The literature on computer-supported co-operative work is widely based on operational transformation (OT) @cite_12 . OT ensures commutativity between concurrent operations by modifying them at replay time. Combined with reliable causal-order broadcast, this ensures convergence with no further concurrency control, but unfortunately OT appears limited to very simple text-editing scenarios. Telex takes advantage of commutativity when it is available, and supports any mix of commutative and non-commutative operations.
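A minimal, hedged sketch of the OT idea for the simplest case, concurrent character insertions: the transformation shifts a remote insert past a concurrent one that has already been applied, with ties broken by a site identifier, so that both replicas converge. Real OT systems also handle deletions and intention preservation; the data layout below is an assumption for illustration.

```python
def transform_insert(op, against):
    """op, against: {'pos': int, 'ch': str, 'site': int}. Returns op rewritten
    so that it can be replayed on a replica that already applied 'against'."""
    pos = op['pos']
    if (against['pos'] < pos or
            (against['pos'] == pos and against['site'] < op['site'])):
        pos += 1                                  # shift past the concurrent insert
    return {'pos': pos, 'ch': op['ch'], 'site': op['site']}

def apply_op(text, op):
    return text[:op['pos']] + op['ch'] + text[op['pos']:]

doc = "abc"
op_a = {'pos': 1, 'ch': 'X', 'site': 0}           # two concurrent edits
op_b = {'pos': 2, 'ch': 'Y', 'site': 1}
site_a = apply_op(apply_op(doc, op_a), transform_insert(op_b, op_a))
site_b = apply_op(apply_op(doc, op_b), transform_insert(op_a, op_b))
print(site_a, site_b)                             # both replicas read "aXbYc"
```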
{ "cite_N": [ "@cite_12" ], "mid": [ "2151943351" ], "abstract": [ "Real-time cooperative editing systems allow multiple users to view and edit the same text graphic image multimedia document at the same time for multiple sites connected by communication networks. Consistency maintenance is one of the most significant challenges in designing and implementing real-time cooperative editing systems. In this article, a consistency model, with properties of convergence, causality preservation, and intention preservation, is proposed as a framework for consistency maintenance in real-time cooperative editing systems. Moreover, an integrated set of schemes and algorithms, which support the proposed consistency model, are devised and discussed in detail. In particular, we have contributed (1) a novel generic operation transformation control algorithm for achieving intention preservation in combination with schemes for achieving convergence and causality preservation and (2) a pair of reversible inclusion and exclusion transformation algorithms for stringwise operations for text editing. An Internet-based prototype system has been built to test the feasibility of the proposed schemes and algorithms" ] }
0805.4680
2127238153
The Telex system is designed for sharing mutable data in a distributed environment, particularly for collaborative applications. Users operate on their local, persistent replica of shared documents; they can work disconnected and suffer no network latency. The Telex approach to detect and correct conflicts is application independent, based on an action-constraint graph (ACG) that summarises the concurrency semantics of applications. The ACG is stored efficiently in a multilog structure that eliminates contention and is optimised for locality. Telex supports multiple applications and multi-document updates. The Telex system clearly separates system logic (which includes replication, views, undo, security, consistency, conflicts, and commitment) from application logic. An example application is a shared calendar for managing multi-user meetings; the system detects meeting conflicts and resolves them consistently.
Constraints were used for reconciliation in the IceCube @cite_7 system. IceCube relies on a primary site for commitment. In Telex, each site runs an IceCube engine (or any alternative) to propose schedules, and the commitment protocol ensures consensus based on these proposals. IceCube supports a richer set of constraints and can extract them from the applications' source code @cite_14 .
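The optimisation flavour of IceCube-style reconciliation can be illustrated with a toy scheduler: given a set of actions, "before" constraints and "antagonistic" (mutually exclusive) pairs, it searches for a feasible schedule that commits as many actions as possible. The constraint vocabulary and the brute-force search are illustrative assumptions, not the IceCube API.

```python
from itertools import combinations, permutations

def best_schedule(actions, before, antagonistic):
    """Return a maximum-size ordering of actions honouring the constraints."""
    for r in range(len(actions), 0, -1):          # try the largest subsets first
        for subset in combinations(actions, r):
            if any(a in subset and b in subset for a, b in antagonistic):
                continue                          # antagonistic pair cannot co-exist
            for order in permutations(subset):
                pos = {a: i for i, a in enumerate(order)}
                if all(pos[a] < pos[b] for a, b in before
                       if a in pos and b in pos):
                    return list(order)            # first feasible schedule of max size
    return []

acts = ["book_room", "invite_alice", "invite_bob"]
print(best_schedule(acts,
                    before=[("book_room", "invite_alice")],
                    antagonistic=[("invite_alice", "invite_bob")]))
```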
{ "cite_N": [ "@cite_14", "@cite_7" ], "mid": [ "162970551", "2118871086" ], "abstract": [ "Optimistic replication lets multiple users update local replicas of shared data independently. These replicas may diverge and must be reconciled. In this paper, we present a general-purpose reconciliation system for mobile transactions. The basic reconciliation engine treats reconciliation as an optimization problem. To direct the search, it relies on semantic information and user intents expressed as relations among mobile transactions. Unlike previous semantics-based reconciliation systems, our system includes a module that automatically infers semantic relations from the code of mobile transactions. Thus, it is possible to use semantics-based reconciliation without incurring the overhead of specifying the semantics of the data types or operations.", "IceCube is a system for optimistic replication, supporting collaborative work and mobile computing. It lets users write to shared data with no mutual synchronisation; however replicas diverge and must be reconciled. IceCube is a general-purpose reconciliation engine, parameterised by “constraints” capturing data semantics and user intents. IceCube combines logs of disconnected actions into near-optimal reconciliation schedules that honour the constraints. IceCube features a simple, high-level, systematic API . It seamlessly integrates diverse applications, sharing various data, and run by concurrent users. This paper focus on the IceCube API and algorithms. Application experience indicates that IceCube simplifies application design, supports a wide variety of application semantics, and seamlessly integrates diverse applications. On a realistic benchmark, IceCube runs at reasonable speeds and scales to large input sets." ] }
0805.4680
2127238153
The Telex system is designed for sharing mutable data in a distributed environment, particularly for collaborative applications. Users operate on their local, persistent replica of shared documents; they can work disconnected and suffer no network latency. The Telex approach to detect and correct conflicts is application independent, based on an action-constraint graph (ACG) that summarises the concurrency semantics of applications. The ACG is stored efficiently in a multilog structure that eliminates contention and is optimised for locality. Telex supports multiple applications and multi-document updates. The Telex system clearly separates system logic (which includes replication, views, undo, security, consistency, conflicts, and commitment) from application logic. An example application is a shared calendar for managing multi-user meetings; the system detects meeting conflicts and resolves them consistently.
The Ivy peer-to-peer file system @cite_16 reconciles the current state of a file from single-writer, append-only logs. There are several differences between Ivy and Telex. Ivy is designed for connected operation. Ivy is state-based and reconciles using a per-byte last-writer-wins (LWW) algorithm by default. Whereas Telex localises logs per document, in Ivy there is a single global log for all the updates of a given participant. Reading any file requires scanning all the logs in the system, which does not scale well, although this is offset somewhat by caching. Ivy has no commitment protocol; therefore, a state may remain tentative indefinitely.
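The per-byte last-writer-wins reconstruction described above can be sketched as follows; the log entry format (timestamp, offset, byte) is an assumption made for illustration, not Ivy's actual on-disk layout.

```python
def reconstruct(logs, size):
    """logs: list of per-participant logs; each log is a list of
    (timestamp, offset, byte) writes. Returns the merged file content."""
    latest = {}                                   # offset -> (timestamp, byte)
    for log in logs:                              # reading means scanning every log
        for ts, off, byte in log:
            if off not in latest or ts > latest[off][0]:
                latest[off] = (ts, byte)          # keep the newest write per offset
    return bytes(latest.get(i, (0, 0))[1] for i in range(size))

log_a = [(1, 0, ord('h')), (1, 1, ord('i')), (5, 1, ord('o'))]
log_b = [(3, 0, ord('y'))]                        # concurrent, newer write at offset 0
print(reconstruct([log_a, log_b], size=2))        # b'yo'
```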
{ "cite_N": [ "@cite_16" ], "mid": [ "2071958655" ], "abstract": [ "Ivy is a multi-user read write peer-to-peer file system. Ivy has no centralized or dedicated components, and it provides useful integrity properties without requiring users to fully trust either the underlying peer-to-peer storage system or the other users of the file system.An Ivy file system consists solely of a set of logs, one log per participant. Ivy stores its logs in the DHash distributed hash table. Each participant finds data by consuiting all logs, but performs modifications by appending only to its own log. This arrangement allows Ivy to maintain meta-data consistency without locking. Ivy users can choose which other logs to trust, an appropriate arrangement in a semi-open peer-to-peer system.Ivy presents applications with a conventional file system interface. When the underlying network is fully connected, Ivy provides NFS-like semantics, such as close-to-open consistency. Ivy detects conflicting modifications made during a partition, and provides relevant version information to application-specific conflict resolvers. Performance measurements on a wide-area network show that Ivy is two to three times slower than NFS." ] }
0805.4022
2949941918
This paper presents a numerical compression strategy for the boundary integral equation of acoustic scattering in two dimensions. These equations have oscillatory kernels that we represent in a basis of wave atoms, and compress by thresholding the small coefficients to zero. This phenomenon was perhaps first observed in 1993 by Bradie, Coifman, and Grossman, in the context of local Fourier bases BCG . Their results have since then been extended in various ways. The purpose of this paper is to bridge a theoretical gap and prove that a well-chosen fixed expansion, the nonstandard wave atom form, provides a compression of the acoustic single and double layer potentials with wave number @math as @math -by- @math matrices with @math nonnegligible entries, with a constant that depends on the relative @math accuracy @math in an acceptable way. The argument assumes smooth, separated, and not necessarily convex scatterers in two dimensions. The essential features of wave atoms that enable to write this result as a theorem is a sharp time-frequency localization that wavelet packets do not obey, and a parabolic scaling wavelength @math (essential diameter) @math . Numerical experiments support the estimate and show that this wave atom representation may be of interest for applications where the same scattering problem needs to be solved for many boundary conditions, for example, the computation of radar cross sections.
There has been a lot of work on sparsifying the integral operator of , or some variants of it, in appropriate bases. In @cite_3 , it was shown that the operator becomes sparse in a local cosine basis. The authors proved that the number of coefficients with absolute value greater than any fixed @math is bounded by @math , where the constant depends on @math . Notice that our result in Theorem is stronger, as the @math norm is used there instead. In @cite_7 , the work of @cite_3 was extended by performing a best basis search in the class of adaptive hierarchical local cosine bases.
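The compress-by-thresholding idea can be illustrated numerically. The sketch below discretises the 2-D Helmholtz single-layer kernel on a circle, moves it into an orthonormal Fourier basis as a crude stand-in for a local cosine or wave atom basis (it is not the transform analysed in this paper), and counts the entries that survive a relative threshold; the grid size, wave number and tolerance are illustrative.

```python
import numpy as np
from scipy.special import hankel1

def compression_ratio(n=256, k=50.0, eps=1e-4):
    t = 2 * np.pi * np.arange(n) / n
    x = np.stack([np.cos(t), np.sin(t)], axis=1)              # points on the unit circle
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    np.fill_diagonal(r, 1.0)                                   # sidestep the diagonal singularity
    kernel = 0.25j * hankel1(0, k * r)                         # (i/4) H_0^(1)(k|x-y|)
    np.fill_diagonal(kernel, 0.0)

    coeffs = np.fft.fft2(kernel) / n                           # 2-D DFT, unitary normalisation
    kept = np.sum(np.abs(coeffs) > eps * np.abs(coeffs).max())
    return kept / coeffs.size                                  # fraction of retained entries

print(compression_ratio())
```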
{ "cite_N": [ "@cite_7", "@cite_3" ], "mid": [ "1969569206", "1993700748" ], "abstract": [ "Abstract The integral ∫0Leiνφ(s,t)f(s)dswith a highly oscillatory kernel (large ν, ν is up to 2000) is considered. This integral is accurately evaluated with an improved trapezoidal rule and effectively transcribed using local Fourier basis and adaptive multiscale local Fourier basis. The representation of the oscillatory kernel in these bases is sparse. The coefficients after the application of local Fourier transform are smoothed. Sometimes this enables us to obtain further compression with wavelets.", "Abstract We prove that certain oscillatory boundary integral operators occurring in acoustic scattering computations become sparse when represented in the appropriate local cosine transform orthonormal basis." ] }
0805.4022
2949941918
This paper presents a numerical compression strategy for the boundary integral equation of acoustic scattering in two dimensions. These equations have oscillatory kernels that we represent in a basis of wave atoms, and compress by thresholding the small coefficients to zero. This phenomenon was perhaps first observed in 1993 by Bradie, Coifman, and Grossman, in the context of local Fourier bases BCG . Their results have since then been extended in various ways. The purpose of this paper is to bridge a theoretical gap and prove that a well-chosen fixed expansion, the nonstandard wave atom form, provides a compression of the acoustic single and double layer potentials with wave number @math as @math -by- @math matrices with @math nonnegligible entries, with a constant that depends on the relative @math accuracy @math in an acceptable way. The argument assumes smooth, separated, and not necessarily convex scatterers in two dimensions. The essential features of wave atoms that enable to write this result as a theorem is a sharp time-frequency localization that wavelet packets do not obey, and a parabolic scaling wavelength @math (essential diameter) @math . Numerical experiments support the estimate and show that this wave atom representation may be of interest for applications where the same scattering problem needs to be solved for many boundary conditions, for example, the computation of radar cross sections.
Besides the local cosine transform, adaptive wavelet packets have also been used to sparsify the integral operator. Deng and Ling @cite_6 applied the best basis algorithm to the integral operator to choose the right one-dimensional wavelet packet basis. Golik @cite_4 independently proposed to apply the best basis algorithm to the right-hand side of the integral equation . Shortly afterwards, Deng and Ling @cite_14 gave similar results by using a predefined wavelet packet basis that refines the frequency domain near @math . All of these approaches work with the standard form expansion of the integral operator. Recently, in @cite_17 , Huybrechs and Vandewalle used the best basis algorithm for two-dimensional wavelet packets to construct a nonstandard sparse expansion of the integral operator. In all of these results, the numbers of nonnegligible coefficients in the expansions were reported to scale like @math . However, our result shows that, by using the nonstandard form based on wave atoms, the number of significant coefficients scales like @math .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_6", "@cite_17" ], "mid": [ "", "2105489741", "2117738155", "2096240124" ], "abstract": [ "", "This paper considers the problem of wavelet sparsification of matrices arising in the numerical solution of electromagnetic integral equations by the method of moments. Scattering of plane waves from two-dimensional (2-D) cylinders is computed numerically using a constant number of test functions per wavelength. Discrete wavelet packet (DWP) similarity transformations and thresholding are applied to system matrices to obtain sparsity. If thresholds are selected to keep the relative residual error constant the matrix sparsity is of order O(N sup P ) with p<2. This stands in contrast with O(N sup 2 ) sparsities obtained with standard wavelet transformations. Numerical tests also show that the DWP method yields faster matrix-vector multiplication than some fast multipole algorithms.", "The adaptive wavelet packet transform is applied to sparsify the moment matrices for the fast solution of electromagnetic integral equations. In the algorithm, a cost function is employed to adaptively select the optimal wavelet packet expansion testing functions to achieve the maximum sparsity possible in the resulting transformed system. The search for the best wavelet packet basis and the moment matrix transformation are implemented by repeated two-channel filtering of the original moment matrix with a pair of quadrature filters. It is found that the sparsified matrix has above-threshold elements that grow only as O(N sup 1.4 ) for typical scatterers. Consequently the operations to solve the transformed moment equation using the conjugate gradient method scales as O(N sup 1.4 ). The additional computational cost for carrying out the adaptive wavelet packet transform is evaluated and discussed.", "We examine the use of wavelet packets for the fast solution of integral equations with a highly oscillatory kernel. The redundancy of the wavelet packet transform allows the selection of a basis tailored to the problem at hand. It is shown that a well chosen wavelet packet basis is better suited to compress the discretized system than wavelets. The complexity of the matrix-vector product in an iterative solution method is then substantially reduced. A two-dimensional wavelet packet transform is derived and compared with a number of one-dimensional transforms that were presented earlier in literature. By means of some numerical experiments we illustrate the improved efficiency of the two-dimensional approach." ] }
0805.4022
2949941918
This paper presents a numerical compression strategy for the boundary integral equation of acoustic scattering in two dimensions. These equations have oscillatory kernels that we represent in a basis of wave atoms, and compress by thresholding the small coefficients to zero. This phenomenon was perhaps first observed in 1993 by Bradie, Coifman, and Grossman, in the context of local Fourier bases BCG . Their results have since then been extended in various ways. The purpose of this paper is to bridge a theoretical gap and prove that a well-chosen fixed expansion, the nonstandard wave atom form, provides a compression of the acoustic single and double layer potentials with wave number @math as @math -by- @math matrices with @math nonnegligible entries, with a constant that depends on the relative @math accuracy @math in an acceptable way. The argument assumes smooth, separated, and not necessarily convex scatterers in two dimensions. The essential features of wave atoms that enable to write this result as a theorem is a sharp time-frequency localization that wavelet packets do not obey, and a parabolic scaling wavelength @math (essential diameter) @math . Numerical experiments support the estimate and show that this wave atom representation may be of interest for applications where the same scattering problem needs to be solved for many boundary conditions, for example, the computation of radar cross sections.
Most of the approaches to sparsifying the operator in well-chosen bases require the construction of the full integral operator. Since this step itself takes @math operations, it poses computational difficulties for large @math values. In @cite_19 , the authors proposed a solution to the related problem of sparsifying the boundary integral operator of the Laplace equation. They successfully avoided the construction of the full integral operator by predicting the locations of the large coefficients and applying a special one-point quadrature rule to compute them. The corresponding solution for the integral operator of the Helmholtz equation is still missing.
{ "cite_N": [ "@cite_19" ], "mid": [ "2094585768" ], "abstract": [ "A class of algorithms is introduced for the rapid numerical application of a class of linear operators to arbitrary vectors. Previously published schemes of this type utilize detailed analytical information about the operators being applied and are specific to extremely narrow classes of matrices. In contrast, the methods presented here are based on the recently developed theory of wavelets and are applicable to all Calderon-Zygmund and pseudo-differential operators. The algorithms of this paper require order O(N) or O(N log N) operations to apply an N × N matrix to a vector (depending on the particular operator and the version of the algorithm being used), and our numerical experiments indicate that many previously intractable problems become manageable with the techniques presented here." ] }
0805.4022
2949941918
This paper presents a numerical compression strategy for the boundary integral equation of acoustic scattering in two dimensions. These equations have oscillatory kernels that we represent in a basis of wave atoms, and compress by thresholding the small coefficients to zero. This phenomenon was perhaps first observed in 1993 by Bradie, Coifman, and Grossman, in the context of local Fourier bases BCG . Their results have since then been extended in various ways. The purpose of this paper is to bridge a theoretical gap and prove that a well-chosen fixed expansion, the nonstandard wave atom form, provides a compression of the acoustic single and double layer potentials with wave number @math as @math -by- @math matrices with @math nonnegligible entries, with a constant that depends on the relative @math accuracy @math in an acceptable way. The argument assumes smooth, separated, and not necessarily convex scatterers in two dimensions. The essential features of wave atoms that enable to write this result as a theorem is a sharp time-frequency localization that wavelet packets do not obey, and a parabolic scaling wavelength @math (essential diameter) @math . Numerical experiments support the estimate and show that this wave atom representation may be of interest for applications where the same scattering problem needs to be solved for many boundary conditions, for example, the computation of radar cross sections.
There has been a different class of methods, initiated by Rokhlin in @cite_5 @cite_1 , that requires no construction of the integral operator and takes @math operations in 2D to apply it. A common feature of these methods @cite_21 @cite_9 @cite_16 @cite_5 @cite_1 is that they partition the spatial domain hierarchically with a tree structure and compute the interaction between the tree nodes in a multiscale fashion: whenever two nodes of the tree are well-separated, the interaction (of the integral operator) between them is accelerated either by Fourier transform-type techniques @cite_21 @cite_5 @cite_1 or by directional low rank representations @cite_9 @cite_16 .
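The hierarchical partitioning shared by these methods can be sketched schematically: split the point set recursively and emit a far-field block whenever two clusters pass a simple well-separation test (the directional variants use a stricter, wavenumber-dependent criterion). The admissibility constant and leaf size below are assumptions, not those of any cited algorithm.

```python
import numpy as np

def cluster_pairs(idx, jdx, pts, pairs, leaf=16):
    ci, cj = pts[idx].mean(0), pts[jdx].mean(0)
    di = np.linalg.norm(pts[idx] - ci, axis=1).max()           # cluster radii
    dj = np.linalg.norm(pts[jdx] - cj, axis=1).max()
    if np.linalg.norm(ci - cj) >= 2.0 * max(di, dj):           # well separated
        pairs.append(("far", idx, jdx))                        # compressible block
    elif len(idx) <= leaf or len(jdx) <= leaf:
        pairs.append(("near", idx, jdx))                       # handled directly
    else:
        for a in np.array_split(idx, 2):                       # recurse on children
            for b in np.array_split(jdx, 2):
                cluster_pairs(a, b, pts, pairs, leaf)

t = 2 * np.pi * np.arange(512) / 512
pts = np.stack([np.cos(t), np.sin(t)], axis=1)                 # boundary points
pairs = []
cluster_pairs(np.arange(512), np.arange(512), pts, pairs)
print(sum(p[0] == "far" for p in pairs), "far-field blocks,",
      sum(p[0] == "near" for p in pairs), "near-field blocks")
```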
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_1", "@cite_5", "@cite_16" ], "mid": [ "", "1963750172", "2089210221", "1987397719", "2117762537" ], "abstract": [ "", "We describe a wideband version of the Fast Multipole Method for the Helmholtz equation in three dimensions. It unifies previously existing versions of the FMM for high and low frequencies into an algorithm which is accurate and efficient for any frequency, having a CPU time of O(N) if low-frequency computations dominate, or O(NlogN) if high-frequency computations dominate. The performance of the algorithm is illustrated with numerical examples.", "Abstract The present paper describes an algorithm for rapid solution of boundary value problems for the Helmholtz equation in two dimensions based on iteratively solving integral equations of scattering theory. CPU time requirements of previously published algorithms of this type are of the order n 2 , where n is the number of nodes in the discretization of the boundary of the scatterer. The CPU time requirements of the algorithm of the present paper are n 4 3 , and can be further reduced, making it considerably more practical for large scale problems.", "Abstract The diagonal forms are constructed for the translation operators for the Helmholtz equation in three dimensions. While the operators themselves have a fairly complicated structure (described somewhat incompletely by the classical addition theorems for the Bessel functions), their diagonal forms turn out to be quite simple. These diagonal forms are realized as generalized integrals, possess straightforward physical interpretations, and admit stable numerical implementation. This paper uses the obtained analytical apparatus to construct an algorithm for the rapid application to arbitrary vectors of matrices resulting from the discretization of integral equations of the potential theory for the Helmholtz equation in three dimensions. It is an extension to the three-dimensional case of the results of Rokhlin (J. Complexity4(1988), 12-32), where a similar apparatus is developed in the two-dimensional case.", "This paper introduces a new directional multilevel algorithm for solving @math -body or @math -point problems with highly oscillatory kernels. These systems often result from the boundary integral formulations of scattering problems and are difficult due to the oscillatory nature of the kernel and the non-uniformity of the particle distribution. We address the problem by first proving that the interaction between a ball of radius @math and a well-separated region has an approximate low rank representation, as long as the well-separated region belongs to a cone with a spanning angle of @math and is at a distance which is at least @math away from from the ball. We then propose an efficient and accurate procedure which utilizes random sampling to generate such a separated, low rank representation. Based on the resulting representations, our new algorithm organizes the high frequency far field computation by a multidirectional and multiscale strategy to achieve maximum efficiency. The algorithm performs well on a large group of highly oscillatory kernels. Our algorithm is proved to have @math computational complexity for any given accuracy when the points are sampled from a two dimensional surface. We also provide numerical results to demonstrate these properties." ] }
0805.4022
2949941918
This paper presents a numerical compression strategy for the boundary integral equation of acoustic scattering in two dimensions. These equations have oscillatory kernels that we represent in a basis of wave atoms, and compress by thresholding the small coefficients to zero. This phenomenon was perhaps first observed in 1993 by Bradie, Coifman, and Grossman, in the context of local Fourier bases BCG . Their results have since then been extended in various ways. The purpose of this paper is to bridge a theoretical gap and prove that a well-chosen fixed expansion, the nonstandard wave atom form, provides a compression of the acoustic single and double layer potentials with wave number @math as @math -by- @math matrices with @math nonnegligible entries, with a constant that depends on the relative @math accuracy @math in an acceptable way. The argument assumes smooth, separated, and not necessarily convex scatterers in two dimensions. The essential features of wave atoms that enable to write this result as a theorem is a sharp time-frequency localization that wavelet packets do not obey, and a parabolic scaling wavelength @math (essential diameter) @math . Numerical experiments support the estimate and show that this wave atom representation may be of interest for applications where the same scattering problem needs to be solved for many boundary conditions, for example, the computation of radar cross sections.
A criticism of the methods in @cite_21 @cite_9 @cite_16 @cite_5 @cite_1 is that the constant in front of the complexity @math is often quite high. On the other hand, since the FFT-based wave atom transforms are extremely efficient, applying the operator in the wave atom frame has a very small constant once the nonstandard sparse representation is constructed. Therefore, for applications where one needs to solve the same Helmholtz equation with many different right-hand sides, the current approach based on the wave atom basis can offer a competitive alternative. As mentioned earlier, one important example is the computation of the radar cross section.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_1", "@cite_5", "@cite_16" ], "mid": [ "", "1963750172", "2089210221", "1987397719", "2117762537" ], "abstract": [ "", "We describe a wideband version of the Fast Multipole Method for the Helmholtz equation in three dimensions. It unifies previously existing versions of the FMM for high and low frequencies into an algorithm which is accurate and efficient for any frequency, having a CPU time of O(N) if low-frequency computations dominate, or O(NlogN) if high-frequency computations dominate. The performance of the algorithm is illustrated with numerical examples.", "Abstract The present paper describes an algorithm for rapid solution of boundary value problems for the Helmholtz equation in two dimensions based on iteratively solving integral equations of scattering theory. CPU time requirements of previously published algorithms of this type are of the order n 2 , where n is the number of nodes in the discretization of the boundary of the scatterer. The CPU time requirements of the algorithm of the present paper are n 4 3 , and can be further reduced, making it considerably more practical for large scale problems.", "Abstract The diagonal forms are constructed for the translation operators for the Helmholtz equation in three dimensions. While the operators themselves have a fairly complicated structure (described somewhat incompletely by the classical addition theorems for the Bessel functions), their diagonal forms turn out to be quite simple. These diagonal forms are realized as generalized integrals, possess straightforward physical interpretations, and admit stable numerical implementation. This paper uses the obtained analytical apparatus to construct an algorithm for the rapid application to arbitrary vectors of matrices resulting from the discretization of integral equations of the potential theory for the Helmholtz equation in three dimensions. It is an extension to the three-dimensional case of the results of Rokhlin (J. Complexity4(1988), 12-32), where a similar apparatus is developed in the two-dimensional case.", "This paper introduces a new directional multilevel algorithm for solving @math -body or @math -point problems with highly oscillatory kernels. These systems often result from the boundary integral formulations of scattering problems and are difficult due to the oscillatory nature of the kernel and the non-uniformity of the particle distribution. We address the problem by first proving that the interaction between a ball of radius @math and a well-separated region has an approximate low rank representation, as long as the well-separated region belongs to a cone with a spanning angle of @math and is at a distance which is at least @math away from from the ball. We then propose an efficient and accurate procedure which utilizes random sampling to generate such a separated, low rank representation. Based on the resulting representations, our new algorithm organizes the high frequency far field computation by a multidirectional and multiscale strategy to achieve maximum efficiency. The algorithm performs well on a large group of highly oscillatory kernels. Our algorithm is proved to have @math computational complexity for any given accuracy when the points are sampled from a two dimensional surface. We also provide numerical results to demonstrate these properties." ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for object identification. An RFID system is composed of one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tag transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves 100% performance. It is based on a proper recursive splitting of the concurrent tag sets, until all tags have been identified. The other approaches present in the literature achieve performances of about 42% on average at most. The counterpart is more sophisticated hardware to be deployed in the manufacture of low cost tags.
The proposed protocols for tag collision resolution in RFID systems are either probabilistic or deterministic. The former include @cite_9 @cite_16 @cite_17 @cite_15 @cite_19 @cite_5 @cite_26 @cite_1 @cite_24 , while the latter include @cite_6 @cite_23 @cite_14 @cite_8 @cite_18 @cite_29 @cite_2 . There are also hybrid approaches, where randomization is applied in tree schemes @cite_22 @cite_30 @cite_0 .
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_29", "@cite_2", "@cite_5", "@cite_15", "@cite_18", "@cite_8", "@cite_23", "@cite_17", "@cite_26", "@cite_6", "@cite_19", "@cite_16", "@cite_14", "@cite_9", "@cite_1", "@cite_24", "@cite_0" ], "mid": [ "2164690908", "2048428746", "2150286589", "2150009405", "2160515777", "2112304434", "2163278169", "2096667520", "", "2111620924", "", "1977607856", "", "2150847784", "2125504058", "2133365802", "2114005194", "2108365714", "2156745475" ], "abstract": [ "The tree search (or splitting) method introduced by Capetanakis (1979) for use in conventional multiaccess systems can also be applied to radiofrequency identification systems arbitration. This paper performs a transient analysis of this, and related methods.", "Purpose – Radio frequency identification (RFID) is a technology for tracking objects that is expected to be widely adopted in very near future. A reader device sends probes to a set of RFID tags, which then respond to the request. A tag is recognized only when it is the only one to respond to the probe. Only reader has collision detection capability. The problem considered here is to minimize the number of probes necessary for reading all the tags, assuming that the number of tags is known in advance.Design methodology approach – Well known binary and n‐ary partitioning algorithms can be applied to solve the problem for the case of known number of tags. A new randomized hybrid tag identification protocol has been proposed, which combines the two partitioning algorithms into a more efficient one. The new scheme optimizes the binary partition protocol for small values of n (e.g. n=2, 3, 4). The hybrid scheme then applies n‐ary partition protocol on the whole set, followed by binary partition on the tags tha...", "In this paper we present a new tree search-based protocol for the anti-collision problem of RFID systems. This protocol builds a binary search tree according to the prefixes chosen randomly by tags rather than using their ID-based prefixes. Therefore, the tag identification time of the proposed protocol is no longer limited by the tag ID distribution and ID length as the conventional tree search protocol. The time complexity of the protocol is derived and shown that it can identify tags faster than the Query-Tree protocol.", "This paper is intended to present bi-slotted tree based RFID tag anti-collision protocols, bi-slotted query tree algorithm (BSQTA) and bi-slotted collision tracking tree algorithm (BSCTTA). Diminishing prefix overhead and iteration overhead is a significant issue to minimize the anti-collision cost. For fast tag identification, BSQTA and BSCTTA use time divided responses depending on whether the collided bit is 0' or 1' at each tag ID. According to the simulation results, BSQTA and BSCTTA require less time consumption for tag identification than the other tree based RFID tag anti-collision protocols", "The authors present an exact analysis of framed ALOHA for the case of a finite number of users. This analysis, which is based on the use of a novel combinatorial technique, does not require any restrictive assumptions on channel traffic. This model can accommodate a general model for capture, in which the probability that one packet is received successfully depends on the number of packets involved in the collision. Performance results are presented for both uncontrolled and dynamically controlled systems. >", "In this paper, we approach the problem of identifying a set of objects in an RFID network. 
We propose a modified version of slotted aloha protocol to reduce the number of transmission collisions. All tags select a slot to transmit their ID by generating a random number. If there is a collision in a slot, the reader broadcasts the next identification request only to tags which collided in that slot. Simulation results show that our approach performs better than framed slotted aloha and query tree based protocols, in terms of number of slots needed to identify all tags, which is a commonly used metric, strictly related to delay.", "Tag identification is an important tool in RFID systems with applications for monitoring and tracking. A RFID reader recognizes tags through communication over a shared wireless channel. When multiple tags transmit their IDs simultaneously, the tag-to-reader signals collide and this collision disturbs a reader's identification process. Therefore, tag collision arbitration for passive tags is a significant issue for fast identification. This paper presents two adaptive tag anticollision protocols: an Adaptive Query Splitting protocol (AQS), which is an improvement on the query tree protocol, and an Adaptive Binary Splitting protocol (ABS), which is based on the binary tree protocol and is a de facto standard for RFID anticollision protocols. To reduce collisions and identify tags efficiently, adaptive tag anticollision protocols use information obtained from the last process of tag identification. Our performance evaluation shows that AQS and ABS outperform other tree-based tag anticollision protocols.", "A radio frequency identification (RFID) reader recognizes objects through wireless communications with RFID tags. Tag collision arbitration for passive tags is a significant issue for fast tag identification due to communication over a shared wireless channel. This paper presents an adaptive memoryless protocol, which is an improvement on the query tree protocol. Memoryless means that tags need not have additional memory except ID for identification. To reduce collisions and identify tags promptly, we use information obtained from the last process of tag identification at a reader. Our performance evaluation shows that the adaptive memoryless protocol causes fewer collisions and takes shorter delay for recognizing all tags while preserving lower communication overhead than other tree based tag anticollision protocols", "", "Adding frame structure to slotted ALOHA makes it very convenient to control the ALOHA channel and eliminate instability. The frame length is adjusted dynamically according to the number of garbled, successful, and empty timeslots in the past. Each terminal that has a packet to transmit selects at random one of the n timeslots of a frame. Dynamic frame length ALOHA achieves a throughput (expected number of successful packets per timeslot) of 0.426 which compares favorably with the 1 e ( .368) upper bound of ordinary slotted ALOHA.", "", "This paper presents an efficient collision resolution protocol and its variations for the tag identification problem, where an electromagnetic reader attempts to obtain within is read range the unique ID number of each tag. The novelty of our main protocol is that each tag is memoryless , i.e., the current response of each tag only depends on the current query of the reader but not on the past history of the reader's queries. Moreover, the only computation required for each tag is to match its ID against the binary string in the query. 
Theoretical resulst in both time and communication complexities are derived to demonstrate the efficiency of our protocols.", "", "In September 1968 the University of Hawaii began work on a research program to investigate the use of radio communications for computer-computer and console-computer links. In this report we describe a remote-access computer system---THE ALOHA SYSTEM---under development as part of that research program and discuss some advantages of radio communications over conventional wire communications for interactive users of a large computer system. Although THE ALOHA SYSTEM research program is composed of a large number of research projects, in this report we shall be concerned primarily with a novel form of random-access radio communications developed for use within THE ALOHA SYSTEM.", "Tag collision arbitration for passive RFID tags is a significant issue for fast tag identification. This letter presents a novel tag anti-collision scheme called adaptive binary splitting (ABS). For reducing collisions, ABS assigns distinct timeslots to tags by using information obtained from the last identification process. Our performance evaluation shows that ABS outperforms other tree based tag anti-collision protocols.", "In RFID system, one of the problems that we must solve is the collision between tags which lowers the efficiency of the RFID system. One of the popular anti-collision algorithms is ALOHA-type algorithms, which are simple and shows good performance when the number of tags to read is small. However, they generally require exponentially increasing number of slots to identify the tags as the number of tag increases. In the paper, we propose a new anti-collision algorithm called enhanced dynamic framed slotted ALOHA (EDFSA) which estimates the number of unread tags first and adjusts the number of responding tags or the frame size to give the optimal system efficiency. As a result, in the proposed method, the number of slots to read the tags increases linearly as the the number of tags does. Simulation results show that the proposed algorithm improves the slot efficiency by 85 spl sim 100 compared to the conventional algorithms when the number of tags is 1000.", "This paper analyses the practical faults existing in enhanced dynamic frame slotted ALOHA (EDFSA) algorithms, and discusses the strategy to improve the efficiency of anti-collision algorithm in RFID system. EDFSA algorithm divides the tags into a number of groups and allows only one group of tags to respond when the number of tags is much larger than optimal efficiency tags. However the efficiency of the system can still be improved. We propose a new anti-collision algorithm called by the name of variant enhanced dynamic frame slotted ALOHA algorithm (VEDFSA) to improve the efficiency of the system that can solve the problem above by dynamic divide tags into groups during the anti-collision solving procedure.", "We propose the ALOHA-based Dynamic Framed Slotted ALOHA algorithm (DFSA) using proposed Tag Estimation Method (TEM) which estimates the number of tags around the reader. We describe the conventional Tag Estimation Method and Dynamic Slot Allocation (DSA), which is the method to dynamically allocate the frame size according to the number of tags. We compare the performance of the proposed DFSA algorithm with the conventional algorithms using simulation. 
According to the analysis, the proposed DFSA algorithm shows better performance than other conventional algorithms regardless of the number of tags because the proposed algorithm has lower complexity and better delay performance.", "In this paper, we propose a hybrid query tree protocol that combines a tree based query protocol with a slotted backoff mechanism. The proposed protocol decreases the average identification delay by reducing collisions and idle time. To reduce collisions, we use a 4-ary query tree instead of a binary query tree. To reduce idle time, we introduce a slotted backoff mechanism to reduce the number of unnecessary query commands. For static scenarios of tags, we extended the proposed protocol by adopting two phases. First, in leaf query phase for existing tags, the interrogator queries leaf-nodes directly to reuse query strings in the previous session. Second, in root query phase for new arriving tags, the interrogator starts the query process from the roof-node. Simulation reveals that the proposed protocol achieves lower identification delay than existing tag collision arbitration protocols regardless of whether tags are mobile or not." ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for object identification. An RFID system is composed of one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tag transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves 100% performance. It is based on a proper recursive splitting of the concurrent tag sets, until all tags have been identified. The other approaches present in the literature achieve performances of about 42% on average at most. The counterpart is more sophisticated hardware to be deployed in the manufacture of low cost tags.
The basic Framed Slotted Aloha anti-collision protocol @cite_17 uses a fixed frame size and does not change the size during the process of tag identification. The Dynamic Framed Slotted Aloha @cite_10 protocol changes the frame size dynamically. The constraint of this protocol is that the frame size cannot be increased indefinitely as the number of tags grows: it has an upper bound. This implies a very high number of collisions when the number of tags exceeds the maximum admitted frame size. The Enhanced Dynamic Framed Slotted Aloha (EDFSA) protocol, analyzed in @cite_9 , overcomes this problem by dividing the unread tags into a number of groups and interrogating each group in turn. The system efficiency of the EDFSA protocol is further improved by the Variant Enhanced Dynamic Framed Slotted Aloha protocol @cite_1 , where a dynamic group-dividing approach is adopted.
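A toy simulation of the EDFSA grouping idea (frame size bounded above, tags split into groups when the population is too large) may help fix intuition; the frame bound, the modulo grouping rule and the random seed are illustrative assumptions, not the protocol's actual parameters.

```python
import random

MAX_FRAME = 256                                   # assumed upper bound on the frame size

def read_group(tags, frame, rng):
    """One framed-slotted-ALOHA round; returns the tags identified (singleton slots)."""
    slots = {}
    for t in tags:
        slots.setdefault(rng.randrange(frame), []).append(t)
    return [g[0] for g in slots.values() if len(g) == 1]

def edfsa(tags, rng=random.Random(0)):
    unread, total_slots = set(tags), 0
    while unread:
        groups = -(-len(unread) // MAX_FRAME)      # ceil: split tags when too many
        for g in range(groups):
            group = [t for t in unread if t % groups == g]
            frame = min(MAX_FRAME, max(len(group), 1))
            total_slots += frame
            unread -= set(read_group(group, frame, rng))
    return total_slots

print(edfsa(range(1000)))                          # total slots spent to read 1000 tags
```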
{ "cite_N": [ "@cite_1", "@cite_9", "@cite_10", "@cite_17" ], "mid": [ "2114005194", "2133365802", "1495958990", "2111620924" ], "abstract": [ "This paper analyses the practical faults existing in enhanced dynamic frame slotted ALOHA (EDFSA) algorithms, and discusses the strategy to improve the efficiency of anti-collision algorithm in RFID system. EDFSA algorithm divides the tags into a number of groups and allows only one group of tags to respond when the number of tags is much larger than optimal efficiency tags. However the efficiency of the system can still be improved. We propose a new anti-collision algorithm called by the name of variant enhanced dynamic frame slotted ALOHA algorithm (VEDFSA) to improve the efficiency of the system that can solve the problem above by dynamic divide tags into groups during the anti-collision solving procedure.", "In RFID system, one of the problems that we must solve is the collision between tags which lowers the efficiency of the RFID system. One of the popular anti-collision algorithms is ALOHA-type algorithms, which are simple and shows good performance when the number of tags to read is small. However, they generally require exponentially increasing number of slots to identify the tags as the number of tag increases. In the paper, we propose a new anti-collision algorithm called enhanced dynamic framed slotted ALOHA (EDFSA) which estimates the number of unread tags first and adjusts the number of responding tags or the frame size to give the optimal system efficiency. As a result, in the proposed method, the number of slots to read the tags increases linearly as the the number of tags does. Simulation results show that the proposed algorithm improves the slot efficiency by 85 spl sim 100 compared to the conventional algorithms when the number of tags is 1000.", "Radio frequency identification systems with passive tags are powerful tools for object identification. However, if multiple tags are to be identified simultaneously, messages from the tags can collide and cancel each other out. Therefore, multiple read cycles have to be performed in order to achieve a high recognition rate. For a typical stochastic anti-collision scheme, we show how to determine the optimal number of read cycles to perform under a given assurance level determining the acceptable rate of missed tags. This yields an efficient procedure for object identification. We also present results on the performance of an implementation.", "Adding frame structure to slotted ALOHA makes it very convenient to control the ALOHA channel and eliminate instability. The frame length is adjusted dynamically according to the number of garbled, successful, and empty timeslots in the past. Each terminal that has a packet to transmit selects at random one of the n timeslots of a frame. Dynamic frame length ALOHA achieves a throughput (expected number of successful packets per timeslot) of 0.426 which compares favorably with the 1 e ( .368) upper bound of ordinary slotted ALOHA." ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for object identification. An RFID system is composed of one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tag transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves 100% performance. It is based on a proper recursive splitting of the concurrent tag sets, until all tags have been identified. The other approaches present in the literature achieve performances of about 42% on average at most. The counterpart is more sophisticated hardware to be deployed in the manufacture of low cost tags.
The frame size affects the performance of Aloha-based algorithms. A small frame size results in many collisions, and thus increases the total number of slots required when the number of tags is high. Conversely, a large frame size may produce many idle time slots when the number of tags is small. Methods for estimating the number of unread tags, which allow the reader to choose an optimal frame size for the next read cycle, are presented in @cite_11 @cite_3 @cite_24 .
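The frame-sizing trade-off can be made concrete with a short calculation: if n tags each pick one of L slots uniformly at random, the expected per-slot efficiency is n/L * (1 - 1/L)^(n-1), which peaks near L = n at roughly 1/e. The Python sketch below only illustrates this reasoning; it is not code from the cited estimation methods, and the values of n and L are arbitrary assumptions.

def expected_efficiency(n, L):
    # Probability that a given slot holds exactly one of the n tags,
    # i.e. the expected fraction of successful slots in a frame of size L.
    return n * (1.0 / L) * (1.0 - 1.0 / L) ** (n - 1)

if __name__ == "__main__":
    n = 100  # assumed number of unread tags
    for L in (32, 64, 100, 128, 256):
        print(f"frame size {L:3d}: expected efficiency {expected_efficiency(n, L):.3f}")
    # The maximum (close to 1/e, about 0.368) is reached when L is near n,
    # which is why estimating the unread-tag population pays off.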
{ "cite_N": [ "@cite_24", "@cite_3", "@cite_11" ], "mid": [ "2108365714", "", "2107154877" ], "abstract": [ "We propose the ALOHA-based Dynamic Framed Slotted ALOHA algorithm (DFSA) using proposed Tag Estimation Method (TEM) which estimates the number of tags around the reader. We describe the conventional Tag Estimation Method and Dynamic Slot Allocation (DSA), which is the method to dynamically allocate the frame size according to the number of tags. We compare the performance of the proposed DFSA algorithm with the conventional algorithms using simulation. According to the analysis, the proposed DFSA algorithm shows better performance than other conventional algorithms regardless of the number of tags because the proposed algorithm has lower complexity and better delay performance.", "", "We propose two ALOHA-based Dynamic Framed Slotted ALOHA algorithms (DFSA) using tag estimation method (TEM), which estimates the number of tags around the reader, and dynamic slot allocation (DSA), which dynamically allocates the frame size for the number of tags. We compare the performance of the proposed DFSA with the conventional Framed Slotted ALOHA algorithm (FSA) using simulation. According to the analysis, two proposed DFSA algorithms show better performance than FSA algorithm regardless of the number of tags" ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for objects identification. An RFID system is composed by one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tags transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves a 100 perfomance. It is based on a proper recursive splitting of the concurrent tags sets, until all tags have been identified. The other approaches present in literature have performances of about 42 in the average at most. The counterpart is a more sophisticated hardware to be deployed in the manufacture of low cost tags.
In @cite_15 , the Tree Slotted Aloha (TSA) protocol is proposed. It reduces tag transmission collisions by querying only those tags that collided in the same slot of a previous frame. At the end of each frame, for every slot in which a collision occurred, the reader starts a new, smaller frame reserved for the tags that collided in that slot. In this way, each transmission frame can be viewed as a node in a tree: the root is the initial frame, and the leaves are frames in which no collision occurred.
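To give a feel for how the tree of frames unfolds, here is a simplified Monte Carlo sketch of the TSA idea. It is an illustrative model rather than the protocol of @cite_15 : in particular it sizes each sub-frame to the number of tags that collided in the slot, whereas the actual protocol estimates that number; the tag count and seed are arbitrary.

import random

def tsa_slots(num_tags, frame_size):
    """Total slots used by a simplified Tree Slotted Aloha round: tags pick
    slots at random, and each collided slot spawns a child frame reserved
    for the tags that collided there (illustrative sketch only)."""
    if num_tags == 0:
        return 0
    slots = [0] * frame_size
    for _ in range(num_tags):
        slots[random.randrange(frame_size)] += 1
    used = frame_size
    for count in slots:
        if count > 1:  # collision: recurse on a sub-frame for these tags
            used += tsa_slots(count, count)
    return used

if __name__ == "__main__":
    random.seed(0)
    tags, runs = 200, 20
    total = sum(tsa_slots(tags, tags) for _ in range(runs))
    print("average slots:", total / runs, "=> efficiency", tags / (total / runs))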
{ "cite_N": [ "@cite_15" ], "mid": [ "2112304434" ], "abstract": [ "In this paper, we approach the problem of identifying a set of objects in an RFID network. We propose a modified version of slotted aloha protocol to reduce the number of transmission collisions. All tags select a slot to transmit their ID by generating a random number. If there is a collision in a slot, the reader broadcasts the next identification request only to tags which collided in that slot. Simulation results show that our approach performs better than framed slotted aloha and query tree based protocols, in terms of number of slots needed to identify all tags, which is a commonly used metric, strictly related to delay." ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for objects identification. An RFID system is composed by one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tags transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves a 100 perfomance. It is based on a proper recursive splitting of the concurrent tags sets, until all tags have been identified. The other approaches present in literature have performances of about 42 in the average at most. The counterpart is a more sophisticated hardware to be deployed in the manufacture of low cost tags.
Tree-based tag anti-collision protocols can have a longer identification delay than Slotted Aloha-based ones, but they avoid tag starvation. Tree-based protocols include binary search protocols @cite_25 and query tree protocols @cite_6 .
{ "cite_N": [ "@cite_25", "@cite_6" ], "mid": [ "2097426907", "1977607856" ], "abstract": [ "This paper proposes a modification to the existing anticollision protocol put forth in version 1.0 protocol specification for 900MHz Class 0 RFID Tag. The version 1.0 specification uses a binary tree approach to singulate one RF tag ID at a time. The proposed change reduces the overall read time of a given number of RFID tags by resetting to the appropriate node, for every consecutive read cycle. The present standard resets to the root node of the binary tree for every read cycle.", "This paper presents an efficient collision resolution protocol and its variations for the tag identification problem, where an electromagnetic reader attempts to obtain within is read range the unique ID number of each tag. The novelty of our main protocol is that each tag is memoryless , i.e., the current response of each tag only depends on the current query of the reader but not on the past history of the reader's queries. Moreover, the only computation required for each tag is to match its ID against the binary string in the query. Theoretical resulst in both time and communication complexities are derived to demonstrate the efficiency of our protocols." ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for objects identification. An RFID system is composed by one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tags transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves a 100 perfomance. It is based on a proper recursive splitting of the concurrent tags sets, until all tags have been identified. The other approaches present in literature have performances of about 42 in the average at most. The counterpart is a more sophisticated hardware to be deployed in the manufacture of low cost tags.
In the query tree (QT) protocol, the reader sends a prefix and all tags whose ID matches that prefix answer. If a collision occurs, the reader queries with a one-bit-longer prefix, until no collision occurs. Once a tag is identified, the reader starts a new round of queries with another prefix. Several improvements of the QT protocol are presented in @cite_6 ; the best performing one, called Query Tree Improved @cite_6 , avoids queries that will certainly produce collisions.
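The prefix-query mechanism just described can be captured in a few lines. The sketch below simulates the plain QT protocol over random binary IDs; it does not include the Query Tree Improved optimization, and the ID length and tag count are arbitrary choices of ours.

import random

def query_tree_queries(ids):
    """Number of reader queries needed by the basic query tree protocol to
    identify all tags in `ids` (distinct binary strings). Illustrative sketch."""
    queries = 0
    stack = ["0", "1"]                 # start from the two one-bit prefixes
    while stack:
        prefix = stack.pop()
        queries += 1
        matching = [tag for tag in ids if tag.startswith(prefix)]
        if len(matching) > 1:          # collision: query one-bit-longer prefixes
            stack.append(prefix + "0")
            stack.append(prefix + "1")
        # exactly one match: the tag is identified; no match: idle query
    return queries

if __name__ == "__main__":
    random.seed(1)
    id_len, n_tags = 16, 100
    ids = [format(v, f"0{id_len}b") for v in random.sample(range(2 ** id_len), n_tags)]
    q = query_tree_queries(ids)
    print(f"{n_tags} tags, {q} queries, efficiency {n_tags / q:.2f}")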
{ "cite_N": [ "@cite_6" ], "mid": [ "1977607856" ], "abstract": [ "This paper presents an efficient collision resolution protocol and its variations for the tag identification problem, where an electromagnetic reader attempts to obtain within is read range the unique ID number of each tag. The novelty of our main protocol is that each tag is memoryless , i.e., the current response of each tag only depends on the current query of the reader but not on the past history of the reader's queries. Moreover, the only computation required for each tag is to match its ID against the binary string in the query. Theoretical resulst in both time and communication complexities are derived to demonstrate the efficiency of our protocols." ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for objects identification. An RFID system is composed by one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tags transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves a 100 perfomance. It is based on a proper recursive splitting of the concurrent tags sets, until all tags have been identified. The other approaches present in literature have performances of about 42 in the average at most. The counterpart is a more sophisticated hardware to be deployed in the manufacture of low cost tags.
In @cite_29 , a Prefix-Randomized Query Tree protocol is proposed, in which tags choose their prefixes at random instead of deriving them from their IDs. Its identification time improves on that of the QT protocol because it is no longer affected by the length and distribution of the tag IDs.
{ "cite_N": [ "@cite_29" ], "mid": [ "2150286589" ], "abstract": [ "In this paper we present a new tree search-based protocol for the anti-collision problem of RFID systems. This protocol builds a binary search tree according to the prefixes chosen randomly by tags rather than using their ID-based prefixes. Therefore, the tag identification time of the proposed protocol is no longer limited by the tag ID distribution and ID length as the conventional tree search protocol. The time complexity of the protocol is derived and shown that it can identify tags faster than the Query-Tree protocol." ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for objects identification. An RFID system is composed by one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tags transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves a 100 perfomance. It is based on a proper recursive splitting of the concurrent tags sets, until all tags have been identified. The other approaches present in literature have performances of about 42 in the average at most. The counterpart is a more sophisticated hardware to be deployed in the manufacture of low cost tags.
In binary search protocols @cite_7 @cite_25 , the reader performs identification by recursively splitting the set of answering tags. Each tag keeps a counter, initially set to zero, and only tags whose counter is zero answer the reader's queries. After each transmission, the reader announces the outcome of the query: collision, identification, or no answer. When a collision occurs, each tag with a zero counter adds a random binary number (0 or 1) to its counter, while the other tags increase their counters by one; in this way the set of answering tags is randomly split into two subsets. After a no-collision transmission, all tags decrease their counters by one.
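The counter mechanism lends itself to a direct simulation. The sketch below models the generic splitting scheme as described above; it is not the code of @cite_7 or @cite_25 , and the tag count and number of runs are arbitrary.

import random

def binary_splitting_slots(n_tags):
    """Slots used by the counter-based binary splitting scheme: tags with
    counter 0 transmit; on a collision they redraw 0/1 while the rest add 1;
    after a non-collision slot every remaining tag decrements. Illustrative."""
    counters = [0] * n_tags
    slots = 0
    while counters:
        slots += 1
        zeros = [i for i, c in enumerate(counters) if c == 0]
        if len(zeros) == 1:                       # identification
            counters.pop(zeros[0])
            counters = [c - 1 for c in counters]
        elif not zeros:                           # idle slot
            counters = [c - 1 for c in counters]
        else:                                     # collision: random split
            counters = [c + random.randint(0, 1) if c == 0 else c + 1
                        for c in counters]
    return slots

if __name__ == "__main__":
    random.seed(2)
    n, runs = 100, 20
    avg = sum(binary_splitting_slots(n) for _ in range(runs)) / runs
    print(f"average slots for {n} tags: {avg:.1f} (efficiency {n / avg:.2f})")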
{ "cite_N": [ "@cite_25", "@cite_7" ], "mid": [ "2097426907", "2116736083" ], "abstract": [ "This paper proposes a modification to the existing anticollision protocol put forth in version 1.0 protocol specification for 900MHz Class 0 RFID Tag. The version 1.0 specification uses a binary tree approach to singulate one RF tag ID at a time. The proposed change reduces the overall read time of a given number of RFID tags by resetting to the appropriate node, for every consecutive read cycle. The present standard resets to the root node of the binary tree for every read cycle.", "The multiaccessing of a broadcast communication channel by independent sources is considered. Previous accessing techniques suffer from long message delays, low throughput, and or congestion instabilities. A new class of high-speed, high-throughput, stable, multiaccessing algorithms is presented. Contentions resolving tree algorithms are introduced, and they are analyzed for specific probabilistic source models. It is shown that these algorithms are stable (in that all moments of delay exist) and are optimal in a certain sense. Furthermore, they have a maximum throughput of 0.430 packets slut and have good delay properties. It is also shown that, under heavy traffic, the optimally controlled tree algorithm adaptively changes to the conventional time-division multiple access protocol." ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for objects identification. An RFID system is composed by one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tags transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves a 100 perfomance. It is based on a proper recursive splitting of the concurrent tags sets, until all tags have been identified. The other approaches present in literature have performances of about 42 in the average at most. The counterpart is a more sophisticated hardware to be deployed in the manufacture of low cost tags.
In @cite_2 , the bi-slotted query tree algorithm and the bi-slotted collision tracking tree algorithm are proposed; both reduce the prefix overhead to speed up tag identification.
{ "cite_N": [ "@cite_2" ], "mid": [ "2150009405" ], "abstract": [ "This paper is intended to present bi-slotted tree based RFID tag anti-collision protocols, bi-slotted query tree algorithm (BSQTA) and bi-slotted collision tracking tree algorithm (BSCTTA). Diminishing prefix overhead and iteration overhead is a significant issue to minimize the anti-collision cost. For fast tag identification, BSQTA and BSCTTA use time divided responses depending on whether the collided bit is 0' or 1' at each tag ID. According to the simulation results, BSQTA and BSCTTA require less time consumption for tag identification than the other tree based RFID tag anti-collision protocols" ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for objects identification. An RFID system is composed by one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tags transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves a 100 perfomance. It is based on a proper recursive splitting of the concurrent tags sets, until all tags have been identified. The other approaches present in literature have performances of about 42 in the average at most. The counterpart is a more sophisticated hardware to be deployed in the manufacture of low cost tags.
Two adaptive tag anti-collision protocols are proposed in @cite_14 @cite_8 @cite_18 : the Adaptive Query Splitting protocol, an improvement of the Query Tree protocol, and the Adaptive Binary Splitting protocol, based on the Binary Tree protocol. To reduce collisions, both protocols reuse information obtained from the previous identification process, under the assumption that in most object tracking and monitoring applications the set of RFID tags encountered in successive readings by a reader does not change substantially, so information from one reading process can be used for the next. An improvement of the Adaptive Binary Splitting protocol is proposed in @cite_13 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_13", "@cite_8" ], "mid": [ "2163278169", "2125504058", "2131835677", "2096667520" ], "abstract": [ "Tag identification is an important tool in RFID systems with applications for monitoring and tracking. A RFID reader recognizes tags through communication over a shared wireless channel. When multiple tags transmit their IDs simultaneously, the tag-to-reader signals collide and this collision disturbs a reader's identification process. Therefore, tag collision arbitration for passive tags is a significant issue for fast identification. This paper presents two adaptive tag anticollision protocols: an Adaptive Query Splitting protocol (AQS), which is an improvement on the query tree protocol, and an Adaptive Binary Splitting protocol (ABS), which is based on the binary tree protocol and is a de facto standard for RFID anticollision protocols. To reduce collisions and identify tags efficiently, adaptive tag anticollision protocols use information obtained from the last process of tag identification. Our performance evaluation shows that AQS and ABS outperform other tree-based tag anticollision protocols.", "Tag collision arbitration for passive RFID tags is a significant issue for fast tag identification. This letter presents a novel tag anti-collision scheme called adaptive binary splitting (ABS). For reducing collisions, ABS assigns distinct timeslots to tags by using information obtained from the last identification process. Our performance evaluation shows that ABS outperforms other tree based tag anti-collision protocols.", "Radio frequency identification has been developed for many years and it got much attention from researchers recently as there are lots of applications being used practically in the real world. Owing to the shared wireless channel between tags and reader during communication, the tag collision arbitration is a significant issue for reducing the communication overhead. This paper presents a novel anti-collision algorithm named as EAA (enhanced anti-collision algorithm) which is based on ABS (Adaptive Binary Splitting) algorithm proposed by We improve the ABS algorithm, and inherit the advantages of the ABS algorithm. EAA uses counter, stack, and Manchester code (Bo , 2005) to reduce the probability of collision efficiently. Compared to the methods proposed by other researchers (Bo Feng, 2006), (Jihoon Myung, 2006) the performance evaluation shows that the proposed scheme in this paper uses fewer timeslots for indentifying tags.", "A radio frequency identification (RFID) reader recognizes objects through wireless communications with RFID tags. Tag collision arbitration for passive tags is a significant issue for fast tag identification due to communication over a shared wireless channel. This paper presents an adaptive memoryless protocol, which is an improvement on the query tree protocol. Memoryless means that tags need not have additional memory except ID for identification. To reduce collisions and identify tags promptly, we use information obtained from the last process of tag identification at a reader. Our performance evaluation shows that the adaptive memoryless protocol causes fewer collisions and takes shorter delay for recognizing all tags while preserving lower communication overhead than other tree based tag anticollision protocols" ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for objects identification. An RFID system is composed by one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tags transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves a 100 perfomance. It is based on a proper recursive splitting of the concurrent tags sets, until all tags have been identified. The other approaches present in literature have performances of about 42 in the average at most. The counterpart is a more sophisticated hardware to be deployed in the manufacture of low cost tags.
Performance for the tag collision problem is usually computed as the ratio between the number of tags to be identified and the number of queries (or time slots, in the case of Slotted Aloha-based protocols) used in the whole identification process. This metric is referred to as system efficiency @cite_9 @cite_15 .
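In code the metric is just a ratio, but writing it out makes protocol comparisons mechanical; the helper name and the example figures below are ours, not taken from the cited papers.

def system_efficiency(tags_identified, slots_used):
    """System efficiency: identified tags divided by the queries/time slots spent."""
    return tags_identified / slots_used

if __name__ == "__main__":
    # e.g. 100 tags identified in 250 slots gives 0.40, in the range reported
    # for QT/TSA-class protocols.
    print(system_efficiency(100, 250))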
{ "cite_N": [ "@cite_9", "@cite_15" ], "mid": [ "2133365802", "2112304434" ], "abstract": [ "In RFID system, one of the problems that we must solve is the collision between tags which lowers the efficiency of the RFID system. One of the popular anti-collision algorithms is ALOHA-type algorithms, which are simple and shows good performance when the number of tags to read is small. However, they generally require exponentially increasing number of slots to identify the tags as the number of tag increases. In the paper, we propose a new anti-collision algorithm called enhanced dynamic framed slotted ALOHA (EDFSA) which estimates the number of unread tags first and adjusts the number of responding tags or the frame size to give the optimal system efficiency. As a result, in the proposed method, the number of slots to read the tags increases linearly as the the number of tags does. Simulation results show that the proposed algorithm improves the slot efficiency by 85 spl sim 100 compared to the conventional algorithms when the number of tags is 1000.", "In this paper, we approach the problem of identifying a set of objects in an RFID network. We propose a modified version of slotted aloha protocol to reduce the number of transmission collisions. All tags select a slot to transmit their ID by generating a random number. If there is a collision in a slot, the reader broadcasts the next identification request only to tags which collided in that slot. Simulation results show that our approach performs better than framed slotted aloha and query tree based protocols, in terms of number of slots needed to identify all tags, which is a commonly used metric, strictly related to delay." ] }
0805.1877
1752027081
Radio Frequency IDentification (RFID) systems are becoming more and more popular in the field of ubiquitous computing, in particular for objects identification. An RFID system is composed by one or more readers and a number of tags. One of the main issues in an RFID network is the fast and reliable identification of all tags in the reader range. The reader issues some queries, and tags properly answer. Then, the reader must identify the tags from such answers. This is crucial for most applications. Since the transmission medium is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or limit the number of tags transmission collisions. We propose a protocol which, under some assumptions about transmission techniques, always achieves a 100 perfomance. It is based on a proper recursive splitting of the concurrent tags sets, until all tags have been identified. The other approaches present in literature have performances of about 42 in the average at most. The counterpart is a more sophisticated hardware to be deployed in the manufacture of low cost tags.
All the protocols proposed so far exhibit an average performance (both in terms of messages and in terms of transmitted bits, two metrics strictly related to system efficiency) well below 50%; the best performing protocols, namely QT and TSA, achieve around 40% @cite_6 @cite_27 . These results are reported in the papers where the protocols were proposed, and were substantially confirmed in @cite_28 , which describes an extensive simulation study assessing the average performance of several tag identification protocols.
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_6" ], "mid": [ "2058818289", "", "1977607856" ], "abstract": [ "In this paper, we approach the problem of identifying a set of objects in an RFID network. We propose a modified version of Slotted Aloha protocol to reduce the number of transmission collisions. All tags select a slot to transmit their ID by generating a random number. If there is a collision in a slot, the reader broadcasts the next identification request only to tags which collided in that slot. Besides, we present an extensive comparative evaluation of collision resolution protocols for tag identification problem in RFID networks. After a quick survey of the best performing RFID tag identification protocols, both deterministic and probabilistic, we present the outcome of intensive simulation experiments set up to evaluate several metrics, such as the total delay of identification process and the bit complexity of reader and tags. The last metric is strictly related to energy constraints required by an RFID system. The experiments point out that our protocol outperform all the other protocols in most cases, and matches them in the others.", "", "This paper presents an efficient collision resolution protocol and its variations for the tag identification problem, where an electromagnetic reader attempts to obtain within is read range the unique ID number of each tag. The novelty of our main protocol is that each tag is memoryless , i.e., the current response of each tag only depends on the current query of the reader but not on the past history of the reader's queries. Moreover, the only computation required for each tag is to match its ID against the binary string in the query. Theoretical resulst in both time and communication complexities are derived to demonstrate the efficiency of our protocols." ] }
0805.0783
2001550314
Separation logic is a recent extension of Hoare logic for reasoning about programs with references to shared mutable data structures. In this paper, we provide a new interpretation of the logic for a programming language with higher types. Our interpretation is based on Reynolds's relational parametricity, and it provides a formal connection between separation logic and data abstraction.
Our solution to this challenge is to define a more refined semantics of the programming language using FM domain theory, in the style of Benton and Leperchey @cite_16 , in which one can name locations but not observe their identity, because of the built-in use of permutations of locations. Part of the trick is to define the semantics in a continuation-passing style, so that one can ensure that new locations are suitably fresh with respect to the remainder of the computation. Benton and Leperchey used the FM domain-theoretic model to reason about contextual equivalence; here we extend the approach to give a semantics of separation logic in a continuation-passing style. We relate this new interpretation to the standard direct-style interpretation of separation logic via the so-called observation closure @math of a relation.
{ "cite_N": [ "@cite_16" ], "mid": [ "1495825275" ], "abstract": [ "We give a monadic semantics in the category of FM-cpos to a higher-order CBV language with recursion and dynamically allocated mutable references that may store both ground data and the addresses of other references, but not functions. This model is adequate, though far from fully abstract. We then develop a relational reasoning principle over the denotational model, and show how it may be used to establish various contextual equivalences involving allocation and encapsulation of store." ] }
0805.0783
2001550314
Separation logic is a recent extension of Hoare logic for reasoning about programs with references to shared mutable data structures. In this paper, we provide a new interpretation of the logic for a programming language with higher types. Our interpretation is based on Reynolds's relational parametricity, and it provides a formal connection between separation logic and data abstraction.
The other main technical challenge in developing a relationally parametric model of separation logic for reasoning about mutable abstract data types is to devise a model that validates a wide range of higher-order frame rules. Our solution is to define an intuitionistic interpretation of the specification logic over a Kripke structure whose ordering relation intuitively captures the framing-in of resources. Technically, the intuitionistic interpretation, in particular the associated Kripke monotonicity, is used to validate a generalized frame rule. Further, to show that the semantics of the logic does indeed satisfy Kripke monotonicity in the base case of triples, we interpret triples using a universal quantifier, which intuitively ranges over the resources that may be framed in. The earlier non-parametric model of higher-order frame rules for separation-logic typing in @cite_0 also made use of a Kripke structure; the difference is that in the present work the elements of the Kripke structure are relations on heaps rather than predicates on heaps, because we build a parametric model.
{ "cite_N": [ "@cite_0" ], "mid": [ "1903525885" ], "abstract": [ "We show how to give a coherent semantics to programs that are well-specified in a version of separation logic for a language with higher types: idealized algol extended with heaps (but with immutable stack variables). In particular, we provide simple sound rules for deriving higher-order frame rules, allowing for local reasoning." ] }
0805.0783
2001550314
Separation logic is a recent extension of Hoare logic for reasoning about programs with references to shared mutable data structures. In this paper, we provide a new interpretation of the logic for a programming language with higher types. Our interpretation is based on Reynolds's relational parametricity, and it provides a formal connection between separation logic and data abstraction.
In earlier work, Banerjee and Naumann @cite_19 studied relational parametricity for dynamically allocated heap objects in a Java-like language. They made use of a non-trivial semantic notion of confinement to describe the internal resources of a module; here instead we use separation logic, in particular separating conjunction and frame rules, to describe which resources are internal to the module. Our model directly captures that whenever a client has been proved correct in separation logic with respect to an abstract view of a module, it does not matter how the module is implemented internally; moreover, this holds for a higher-order language with higher-order frame rules.
{ "cite_N": [ "@cite_19" ], "mid": [ "2013368693" ], "abstract": [ "Representation independence formally characterizes the encapsulation provided by language constructs for data abstraction and justifies reasoning by simulation. Representation independence has been shown for a variety of languages and constructs but not for shared references to mutable state; indeed it fails in general for such languages. This article formulates representation independence for classes, in an imperative, object-oriented language with pointers, subclassing and dynamic dispatch, class oriented visibility control, recursive types and methods, and a simple form of module. An instance of a class is considered to implement an abstraction using private fields and so-called representation objects. Encapsulation of representation objects is expressed by a restriction, called confinement, on aliasing. Representation independence is proved for programs satisfying the confinement condition. A static analysis is given for confinement that accepts common designs such as the observer and factory patterns. The formalization takes into account not only the usual interface between a client and a class that provides an abstraction but also the interface (often called “protected”) between the class and its subclasses." ] }
0805.0783
2001550314
Separation logic is a recent extension of Hoare logic for reasoning about programs with references to shared mutable data structures. In this paper, we provide a new interpretation of the logic for a programming language with higher types. Our interpretation is based on Reynolds's relational parametricity, and it provides a formal connection between separation logic and data abstraction.
An extended abstract of this paper was presented at the FOSSACS 2007 conference @cite_3 . This paper includes proofs that were missing in the conference version, and describes a general mathematical construction that lies behind our parametric model of separation logic. We also include a new example that illustrates the subtleties of the problems and results.
{ "cite_N": [ "@cite_3" ], "mid": [ "1511487092" ], "abstract": [ "Separation logic is a recent extension of Hoare logic for reasoning about programs with references to shared mutable data structures. In this paper, we provide a new interpretation of the logic for a programming language with higher types. Our interpretation is based on Reynolds's relational parametricity, and it provides a formal connection between separation logic and data abstraction." ] }
0805.1226
2949411402
Two-tier networks, comprising a conventional cellular network overlaid with shorter range hotspots (e.g. femtocells, distributed antennas, or wired relays), offer an economically viable way to improve cellular system capacity. The capacity-limiting factor in such networks is interference. The cross-tier interference between macrocells and femtocells can suffocate the capacity due to the near-far problem, so in practice hotspots should use a different frequency channel than the potentially nearby high-power macrocell users. Centralized or coordinated frequency planning, which is difficult and inefficient even in conventional cellular networks, is all but impossible in a two-tier network. This paper proposes and analyzes an optimum decentralized spectrum allocation policy for two-tier networks that employ frequency division multiple access (including OFDMA). The proposed allocation is optimal in terms of Area Spectral Efficiency (ASE), and is subjected to a sensible Quality of Service (QoS) requirement, which guarantees that both macrocell and femtocell users attain at least a prescribed data rate. Results show the dependence of this allocation on the QoS requirement, hotspot density and the co-channel interference from the macrocell and surrounding femtocells. Design interpretations of this result are provided.
The problem considered in this paper is related to the work of Yeung and Nanda @cite_10 , who propose frequency partitioning in a microcell/macrocell system based on mobile speeds and the user loading in each cell. Similar dynamic channel allocation schemes are proposed in @cite_27 and @cite_12 . Their frequency partitioning is derived by choosing handoff velocity thresholds and maximizing the overall system capacity, subject to per-tier blocking probability constraints that ignore co-channel interference (CCI). In contrast, our work determines the spectrum allocation that maximizes the system-wide ASE while accounting for interference from neighboring BSs, path loss, and the prevailing channel conditions.
{ "cite_N": [ "@cite_27", "@cite_10", "@cite_12" ], "mid": [ "2138798774", "2146974003", "2147773983" ], "abstract": [ "Multitier networks that provide coverage to the same areas by several cells of different sizes are useful to accommodate high traffic density while keeping high quality of service. The author gives an overview of a number of contributions on this subject. Two main issues are considered: spectrum sharing between different layers with a focus on F TDMA systems and teletraffic performance of multitier networks given different handover policies.", "In this paper, we study spectrum management in a two-tier microcell macrocell cellular system. Two issues are studied: micro-macro cell selection and frequency spectrum partitioning between microcells and macrocells. To keep the handoff rate in a two-tier cellular system at an acceptable level, low mobility users (with speed spl upsi V sub 0 ) should undergo handoffs at macrocell boundaries. The mobile determines user mobility from microcell sojourn times and uses it for channel assignment at call origination and handoff. The probability of erroneous assignment of a mobile to a microcell or macrocell is shown to be significantly lower than previous approaches. We investigate the optimal velocity threshold, V sub 0 , and propose that it may be dynamically adjusted according to traffic load. Finally, we propose a systematic way for finding an optimal partition of frequency spectrum between microcells and macrocells. This partitioning is based on the traffic load and velocity distribution of mobiles in the system.", "The umbrella cell system, where the same radio system is used for microcells and overlaying macrocells, is the most promising strategy for deploying microcell service to cope with increased portable radio subscribers. A practical approach to implementing a microcell system overlaid with an existing macrocell system is proposed. There is no need to design the channel assignment and transmit power management for the microcell system by introducing the channel segregation, a self-organized dynamic channel assignment and the automatic transmit power control. The system channels are reused automatically between the macrocells and microcells. The interference from the macrocell to the microcell is compensated by slight increase of transmit power for the microcell system. It is shown by computer simulation that locally increased traffic is accommodated with the microcells laid under macrocells without any effort for channel management. >" ] }
0804.4204
2950730615
In wireless networks, the knowledge of nodal distances is essential for several areas such as system configuration, performance analysis and protocol design. In order to evaluate distance distributions in random networks, the underlying nodal arrangement is almost universally taken to be an infinite Poisson point process. While this assumption is valid in some cases, there are also certain impracticalities to this model. For example, practical networks are non-stationary, and the number of nodes in disjoint areas are not independent. This paper considers a more realistic network model where a finite number of nodes are uniformly randomly distributed in a general d-dimensional ball of radius R and characterizes the distribution of Euclidean distances in the system. The key result is that the probability density function of the distance from the center of the network to its nth nearest neighbor follows a generalized beta distribution. This finding is applied to study network characteristics such as energy consumption, interference, outage and connectivity.
In @cite_9 , the probability density function (pdf) and cumulative distribution function (cdf) of distances between nodes are derived for networks with uniformly random and Gaussian distributed nodes over a rectangular area. @cite_4 studies mean internodal distance properties for several kinds of multihop systems such as ring networks, Manhattan street networks, hypercubes and shufflenets. @cite_1 provides closed-form expressions for the distributions in @math -dimensional homogeneous PPPs and describes several applications of the results for large networks. @cite_5 derives the joint distribution of distances of nodes from a common reference point for planar networks with a finite number of nodes randomly distributed on a square. @cite_7 considers square random networks and determines the pdf and cdf of nearest neighbor and internodal distances. @cite_13 investigates one-dimensional multihop networks with randomly located nodes and analyzes the distributions of single-hop and multiple-hop distances.
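As a concrete instance of such distance results, consider the simplest case in this setting: N nodes uniformly distributed in a 2-D disk of radius R, for which the distance D from the center to the nearest node satisfies P(D > r) = (1 - r^2/R^2)^N, i.e. (D/R)^2 is Beta(1, N) distributed. The sketch below checks the implied mean distance by Monte Carlo against numerical integration; it is an illustrative verification under these assumptions, not code from the cited works.

import math
import random

def empirical_nearest_mean(N, R, trials=5000):
    """Monte Carlo mean distance from the disk center to the nearest of N
    uniformly placed nodes (radius of a uniform point in a disk is R*sqrt(U))."""
    total = 0.0
    for _ in range(trials):
        total += min(R * math.sqrt(random.random()) for _ in range(N))
    return total / trials

def analytical_nearest_mean(N, R, steps=20000):
    """E[D] = integral_0^R (1 - r^2/R^2)^N dr, evaluated by a midpoint Riemann sum."""
    dr = R / steps
    return sum((1.0 - ((i + 0.5) * dr / R) ** 2) ** N * dr for i in range(steps))

if __name__ == "__main__":
    random.seed(3)
    N, R = 20, 1.0
    print("empirical mean :", round(empirical_nearest_mean(N, R), 4))
    print("analytical mean:", round(analytical_nearest_mean(N, R), 4))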
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_5", "@cite_13" ], "mid": [ "2108666886", "1848925358", "", "2102413189", "2158011976", "" ], "abstract": [ "The minimum necessary aggregate link capacity in a telecommunication network is directly proportional to the mean distance between nodes. The mean internodal distance is therefore an important network characteristic. It is shown that most network topologies, including those constructed at random, display mean internodal distances comparable to those of many carefully designed networks. Thus, careful selection of network topology to minimize the mean internodal distance may be important in only the most sensitive applications. Furthermore, even in such sensitive applications, an almost randomly chosen network topology may be the best choice. >", "Separation distance between nodes is an important index in characterizing the optimum transmission range, the most probable Euclidean distance between two random selected nodes, the node degree and the network connectivity of wireless ad hoc networks. However, because nodes are randomly deployed, the separation distances between nodes in the wireless ad hoc networks are also random. Thus, in this paper, we present methodologies to analyze three distance-related probability distributions: the distribution of the distance to the k-th nearest neighbor, the distribution of the distance between two random selected nodes and the joint distribution of the distances between nodes and a common reference node.", "", "The distribution of Euclidean distances in Poisson point processes is determined. The main result is the density function of the distance to the n-nearest neighbor of a homogeneous process in Ropfm, which is shown to be governed by a generalized Gamma distribution. The result has many implications for large wireless networks of randomly distributed nodes", "The calculation of two-hop connectivity be- tween two terminals for randomly deployed wireless networks requires the joint probability distribution of the distances between these terminals and the terminal that is acting as a relay. In general the distances are not independent since a common terminal is involved. The marginal distributions for link distances are known for various random deployment models. However, the joint distribution of two or more link distances is not known. In this paper, the derivation of the joint distribution is given in general form and in a new form suitable for computation for a network of terminals randomly deployed in a square area.", "" ] }
0804.4356
1629774830
Researchers have devoted themselves to exploring static features of social networks and further discovered many representative characteristics, such as power law in the degree distribution and assortative value used to differentiate social networks from nonsocial ones. However, people are not satisfied with these achievements and more and more attention has been paid on how to uncover those dynamic characteristics of social networks, especially how to track community evolution effectively. With these interests, in the paper we firstly display some basic but dynamic features of social networks. Then on its basis, we propose a novel core-based algorithm of tracking community evolution, CommTracker, which depends on core nodes to establish the evolving relationships among communities at different snapshots. With the algorithm, we discover two unique phenomena in social networks and further propose two representative coefficients: GROWTH and METABOLISM by which we are also able to distinguish social networks from nonsocial ones from the dynamic aspect. At last, we have developed a social network model which has the capabilities of exhibiting two necessary features above.
A lot of work has been dedicated to exploring the characteristics of social networks. Barabasi and Albert show an uneven degree distribution through the BA model @cite_16 . Newman has identified characteristics that distinguish social networks from nonsocial ones @cite_14 . Various methods have been used to detect community structure, among them Newman's betweenness algorithm @cite_12 @cite_10 , Nan Du's clique-based algorithm @cite_3 , and CPM @cite_2 , which focuses on finding overlapping communities. Clustering is another technique for grouping similar nodes into larger communities, including L. Donetti and M. Miguel's method @cite_9 , which exploits spectral properties of the graph and its Laplacian matrix, and J. Hopcroft's "natural community" approach @cite_17 . Several social network models have also been proposed @cite_5 @cite_1 @cite_11 .
{ "cite_N": [ "@cite_14", "@cite_11", "@cite_9", "@cite_1", "@cite_3", "@cite_2", "@cite_5", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2040956707", "1999762141", "1998837416", "1530582735", "2047013860", "", "2099021783", "2008620264", "2089458547", "1971421925", "" ], "abstract": [ "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.", "We consider a dynamic social network model in which agents play repeated games in pairings determined by a stochastically evolving social network. Individual agents begin to interact at random, with the interactions modeled as games. The game payoffs determine which interactions are reinforced, and the network structure emerges as a consequence of the dynamics of the agents' learning behavior. We study this in a variety of game-theoretic conditions and show that the behavior is complex and sometimes dissimilar to behavior in the absence of structural dynamics. We argue that modeling network structure as dynamic increases realism without rendering the problem of analysis intractable.", "An efficient and relatively fast algorithm for the detection of communities in complex networks is introduced. The method exploits spectral properties of the graph Laplacian matrix combined with hierarchical clustering techniques, and includes a procedure for maximizing the 'modularity' of the output. Its performance is compared with that of other existing methods, as applied to different well-known instances of complex networks with a community structure, both computer generated and from the real world. Our results are, in all the cases tested, at least as good as the best ones obtained with any other methods, and faster in most of the cases than methods providing similar quality results. This converts the algorithm into a valuable computational tool for detecting and analysing communities and modular structures in complex networks.", "The seceder model illustrates how the desire to be different from the average can lead to formation of groups in a population. We turn the original, agent based, seceder model into a model of netwo ...", "Recent years have seen that WWW is becoming a flourishing social media which enables individuals to easily share opinions, experiences and expertise at the push of a single button. With the pervasive usage of instant messaging systems and the fundamental shift in the ease of publishing content, social network researchers and graph theory researchers are now concerned with inferring community structures by analyzing the linkage patterns among individuals and web pages. Although the investigation of community structures has motivated many diverse algorithms, most of them are unsuitable for large-scale social networks because of the computational cost. Moreover, in addition to identify the possible community structures, how to define and explain the discovered communities is also significant in many practical scenarios. 
In this paper, we present the algorithm ComTector(Community DeTector) which is more efficient for the community detection in large-scale social networks based on the nature of overlapping communities in the real world. This algorithm does not require any priori knowledge about the number or the original division of the communities. Because real networks are often large sparse graphs, its running time is thus O(C × Tri2), where C is the number of the detected communities and Tri is the number of the triangles in the given network for the worst case. Then we propose a general naming method by combining the topological information with the entity attributes to define the discovered communities. With respected to practical applications, ComTector is challenged with several real life networks including the Zachary Karate Club, American College Football, Scientific Collaboration, and Telecommunications Call networks. Experimental results show that this algorithm can extract meaningful communities that are agreed with both of the objective facts and our intuitions.", "", "We propose some simple models of the growth of social networks, based on three general principles: (1) meetings take place between pairs of individuals at a rate which is high if a pair has one or more mutual friends and low otherwise; (2) acquaintances between pairs of individuals who rarely meet decay over time; (3) there is an upper limit on the number of friendships an individual can maintain. using computer simulations, we find that models that incorporatge all of these features reproduce many of the features of real social networks, including high levels of clustering or network transitivity and strong community structure in which individuals have more links to others within their community than to individuals from other communities.", "Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.", "Many networks display community structure---groups of vertices within which connections are dense but between which they are sparser---and sensitive computer algorithms have in recent years been developed for detecting this structure. These algorithms, however, are computationally demanding, which limits their application to small networks. Here we describe an algorithm which gives excellent results when tested on both computer-generated and real-world networks and is much faster, typically thousands of times faster, than previous algorithms. We give several example applications, including one to a collaboration network of more than 50 000 physicists.", "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. 
In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.", "" ] }
0804.4356
1629774830
Researchers have devoted themselves to exploring static features of social networks and further discovered many representative characteristics, such as power law in the degree distribution and assortative value used to differentiate social networks from nonsocial ones. However, people are not satisfied with these achievements and more and more attention has been paid on how to uncover those dynamic characteristics of social networks, especially how to track community evolution effectively. With these interests, in the paper we firstly display some basic but dynamic features of social networks. Then on its basis, we propose a novel core-based algorithm of tracking community evolution, CommTracker, which depends on core nodes to establish the evolving relationships among communities at different snapshots. With the algorithm, we discover two unique phenomena in social networks and further propose two representative coefficients: GROWTH and METABOLISM by which we are also able to distinguish social networks from nonsocial ones from the dynamic aspect. At last, we have developed a social network model which has the capabilities of exhibiting two necessary features above.
With respect to core node detection, Roger Guimera and Luis A. Nunes Amaral propose a methodology that classifies nodes into universal roles according to their pattern of intra- and inter-module connections @cite_0 . B. Wu offers a threshold-based method to detect core nodes @cite_6 . Shaojie Qiao and Qihong Liu focus on mining the core members of a crime community @cite_8 .
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_8" ], "mid": [ "2017987256", "2132106171", "1579798696" ], "abstract": [ "High-throughput techniques are leading to an explosive growth in the size of biological databases and creating the opportunity to revolutionize our understanding of life and disease. Interpretation of these data remains, however, a major scientific challenge. Here, we propose a methodology that enables us to extract and display information contained in complex networks1,2,3. Specifically, we demonstrate that we can find functional modules4,5 in complex networks, and classify nodes into universal roles according to their pattern of intra- and inter-module connections. The method thus yields a ‘cartographic representation’ of complex networks. Metabolic networks6,7,8 are among the most challenging biological networks and, arguably, the ones with most potential for immediate applicability9. We use our method to analyse the metabolic networks of twelve organisms from three different superkingdoms. We find that, typically, 80 of the nodes are only connected to other nodes within their respective modules, and that nodes with different roles are affected by different evolutionary constraints and pressures. Remarkably, we find that metabolites that participate in only a few reactions but that connect different modules are more conserved than hubs whose links are mostly within a single module.", "Recently there has been considerable interest in the study of community detection in social network. However, to get more detailed knowledge about the global organization of the whole network and the discovered communities, how to explain and utilize these communities will be far more significant in many practical scenarios. Thus, we propose the problem of resume mining of social network communities. We also study three important aspects of resume mining: the characterization of community , the discrimination among communities and the community evolution mining. Unlike other similar algorithms, our solutions fully consider the inner topology of a community together with the attributes of nodes. Then we also study two cases: the first is about mobile call graphs and the second is about co-authorship networks. The result shows that the community resume found by our methods represents state and history of a community clearly.", "Since the incident about 9.11, the Security Sectors of many countries have put great attentions on gathering and mining of crime data and establishing anti-terrorist databases. With the emergence of anti-terrorist application, data mining for anti-terrorist has attracted great attention from both researchers and officers as well as in China. The purpose of analyzing and mining related terrorist or crimes data is that analyzing of psychology, behavior and related laws about crime, and providing hidden clues to prevent a criminal case, and forecasting terror happening to keeping with crime limits." ] }
0804.4356
1629774830
Researchers have devoted considerable effort to exploring the static features of social networks and have identified many representative characteristics, such as the power-law degree distribution and the assortativity value used to differentiate social networks from non-social ones. Increasingly, however, attention has turned to uncovering the dynamic characteristics of social networks, and in particular to tracking community evolution effectively. With these interests, in this paper we first present some basic dynamic features of social networks. On this basis, we propose CommTracker, a novel core-based algorithm for tracking community evolution, which relies on core nodes to establish the evolving relationships among communities at different snapshots. With this algorithm, we discover two unique phenomena in social networks and propose two representative coefficients, GROWTH and METABOLISM, by which we are also able to distinguish social networks from non-social ones from the dynamic perspective. Finally, we develop a social network model capable of exhibiting the two features described above.
As to dynamic graph mining, Tanya Y. Berger-Wolf and Jared Saia study community evolution based on node overlap @cite_19 ; John Hopcroft and Omar Khan propose a method that uses "natural communities" to track evolution @cite_7 . However, both methods require several parameters to be set, which makes them hard to adapt to different situations. In contrast, the authors of @cite_18 suggest the notion of parameter-free data mining. Jimeng Sun's GraphScope is a parameter-free method for mining large time-evolving graphs @cite_13 , based on information-theoretic principles. Our method in this paper shares the same spirit.
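The parameter-free philosophy mentioned above is often illustrated with compression-based similarity, which needs no tuning beyond an off-the-shelf compressor. Below is a minimal sketch of the normalized compression distance (an assumed illustration; the cited works may use different measures):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings: smaller means more similar."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy usage: the similar pair should score noticeably smaller than the dissimilar pair.
a = b"the quick brown fox jumps over the lazy dog" * 10
b_ = b"the quick brown fox leaps over the lazy dog" * 10
c = b"lorem ipsum dolor sit amet consectetur adipiscing" * 10
print(ncd(a, b_), ncd(a, c))
```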
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_13", "@cite_7" ], "mid": [ "", "2166064672", "2155640700", "2129043771" ], "abstract": [ "", "Most data mining algorithms require the setting of many input parameters. Two main dangers of working with parameter-laden algorithms are the following. First, incorrect settings may cause an algorithm to fail in finding the true patterns. Second, a perhaps more insidious problem is that the algorithm may report spurious patterns that do not really exist, or greatly overestimate the significance of the reported patterns. This is especially likely when the user fails to understand the role of parameters in the data mining process.Data mining algorithms should have as few parameters as possible, ideally none. A parameter-free algorithm would limit our ability to impose our prejudices, expectations, and presumptions on the problem at hand, and would let the data itself speak to us. In this work, we show that recent results in bioinformatics and computational theory hold great promise for a parameter-free data-mining paradigm. The results are motivated by observations in Kolmogorov complexity theory. However, as a practical matter, they can be implemented using any off-the-shelf compression algorithm with the addition of just a dozen or so lines of code. We will show that this approach is competitive or superior to the state-of-the-art approaches in anomaly interestingness detection, classification, and clustering with empirical tests on time series DNA text video datasets.", "How can we find communities in dynamic networks of socialinteractions, such as who calls whom, who emails whom, or who sells to whom? How can we spot discontinuity time-points in such streams of graphs, in an on-line, any-time fashion? We propose GraphScope, that addresses both problems, using information theoretic principles. Contrary to the majority of earlier methods, it needs no user-defined parameters. Moreover, it is designed to operate on large graphs, in a streaming fashion. We demonstrate the efficiency and effectiveness of our GraphScope on real datasets from several diverse domains. In all cases it produces meaningful time-evolving patterns that agree with human intuition.", "We are interested in tracking changes in large-scale data by periodically creating an agglomerative clustering and examining the evolution of clusters (communities) over time. We examine a large real-world data set: the NEC CiteSeer database, a linked network of >250,000 papers. Tracking changes over time requires a clustering algorithm that produces clusters stable under small perturbations of the input data. However, small perturbations of the CiteSeer data lead to significant changes to most of the clusters. One reason for this is that the order in which papers within communities are combined is somewhat arbitrary. However, certain subsets of papers, called natural communities, correspond to real structure in the CiteSeer database and thus appear in any clustering. By identifying the subset of clusters that remain stable under multiple clustering runs, we get the set of natural communities that we can track over time. We demonstrate that such natural communities allow us to identify emerging communities and track temporal changes in the underlying structure of our network data." ] }
0804.4356
1629774830
Researchers have devoted considerable effort to exploring the static features of social networks and have identified many representative characteristics, such as the power-law degree distribution and the assortativity value used to differentiate social networks from non-social ones. Increasingly, however, attention has turned to uncovering the dynamic characteristics of social networks, and in particular to tracking community evolution effectively. With these interests, in this paper we first present some basic dynamic features of social networks. On this basis, we propose CommTracker, a novel core-based algorithm for tracking community evolution, which relies on core nodes to establish the evolving relationships among communities at different snapshots. With this algorithm, we discover two unique phenomena in social networks and propose two representative coefficients, GROWTH and METABOLISM, by which we are also able to distinguish social networks from non-social ones from the dynamic perspective. Finally, we develop a social network model capable of exhibiting the two features described above.
As forerunners, A.-L. Barabasi and H. Jeong study how static characteristics vary over time on the network of scientific collaboration @cite_15 . Gergely Palla and A.-L. Barabasi provide a method that effectively uses edge overlap to build the evolving relationships between communities @cite_4 . With this approach, they uncover valuable phenomena of social community evolution.
{ "cite_N": [ "@cite_15", "@cite_4" ], "mid": [ "2145845082", "2432978112" ], "abstract": [ "The co-authorship network of scientists represents a prototype of complex evolving networks. In addition, it offers one of the most extensive database to date on social networks. By mapping the electronic database containing all relevant journals in mathematics and neuro-science for an 8-year period (1991–98), we infer the dynamic and the structural mechanisms that govern the evolution and topology of this complex system. Three complementary approaches allow us to obtain a detailed characterization. First, empirical measurements allow us to uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The results indicate that the network is scale-free, and that the network evolution is governed by preferential attachment, affecting both internal and external links. However, in contrast with most model predictions the average degree increases in time, and the node separation decreases. Second, we propose a simple model that captures the network's time evolution. In some limits the model can be solved analytically, predicting a two-regime scaling in agreement with the measurements. Third, numerical simulations are used to uncover the behavior of quantities that could not be predicted analytically. The combined numerical and analytical results underline the important role internal links play in determining the observed scaling behavior and network topology. The results and methodologies developed in the context of the co-authorship network could be useful for a systematic study of other complex evolving networks as well, such as the world wide web, Internet, or other social networks.", "The processes by which communities come together, attract new members, and develop over time is a central research issue in the social sciences - political movements, professional organizations, and religious denominations all provide fundamental examples of such communities. In the digital domain, on-line groups are becoming increasingly prominent due to the growth of community and social networking sites such as MySpace and LiveJournal. However, the challenge of collecting and analyzing large-scale time-resolved data on social groups and communities has left most basic questions about the evolution of such groups largely unresolved: what are the structural features that influence whether individuals will join communities, which communities will grow rapidly, and how do the overlaps among pairs of communities change over time.Here we address these questions using two large sources of data: friendship links and community membership on LiveJournal, and co-authorship and conference publications in DBLP. Both of these datasets provide explicit user-defined communities, where conferences serve as proxies for communities in DBLP. We study how the evolution of these communities relates to properties such as the structure of the underlying social networks. We find that the propensity of individuals to join communities, and of communities to grow rapidly, depends in subtle ways on the underlying network structure. For example, the tendency of an individual to join a community is influenced not just by the number of friends he or she has within the community, but also crucially by how those friends are connected to one another. We use decision-tree techniques to identify the most significant structural determinants of these properties. 
We also develop a novel methodology for measuring movement of individuals between communities, and show how such movements are closely aligned with changes in the topics of interest within the communities." ] }
0804.4662
2949541653
In this paper the performance limits and design principles of rateless codes over fading channels are studied. The diversity-multiplexing tradeoff (DMT) is used to analyze the system performance for all possible transmission rates. It is revealed from the analysis that the design of such rateless codes follows the design principle of approximately universal codes for parallel multiple-input multiple-output (MIMO) channels, in which each sub-channel is a MIMO channel. More specifically, it is shown that for a single-input single-output (SISO) channel, the previously developed permutation codes of unit length for parallel channels having rate LR can be transformed directly into rateless codes of length L having multiple rate levels (R, 2R, . . ., LR), to achieve the DMT performance limit.
Rateless coding may be viewed as a type of hybrid-ARQ scheme @cite_0 . The DMT for ARQ has been derived in @cite_0 . However, it will be shown in this paper that that DMT curve is incomplete and represents the performance only when @math , where @math and @math are the numbers of transmit and receive antennas. The DMT curve for rateless coding, including the parts for higher @math , has not been revealed before and will be shown in this paper. In addition, the results in this paper relate the design parameters (i.e., @math and @math ) to the effective multiplexing gain @math of the system, and thus offer further insight into system design and operational meaning compared with conventional coding schemes. Furthermore, we suggest new design solutions for rateless codes. Previous work on finite-rate feedback MIMO channels relies on either power control or adaptive modulation and coding (e.g., @cite_3 ), neither of which is necessary for our scheme.
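For context, the classical diversity-multiplexing tradeoff of an m x n quasi-static MIMO channel without ARQ or feedback is the piecewise-linear curve through the points (k, (m-k)(n-k)), k = 0, 1, ..., min(m, n). The sketch below evaluates this baseline Zheng-Tse curve; it is not the rateless/ARQ tradeoff derived in the paper:

```python
def dmt(r: float, m: int, n: int) -> float:
    """Optimal diversity gain d*(r) of an m x n quasi-static MIMO channel:
    linear interpolation of the corner points (k, (m-k)(n-k)), 0 <= r <= min(m, n)."""
    if not 0 <= r <= min(m, n):
        raise ValueError("multiplexing gain must lie in [0, min(m, n)]")
    k = int(r)  # lower corner point
    if k == min(m, n):
        return 0.0
    d_k = (m - k) * (n - k)
    d_k1 = (m - k - 1) * (n - k - 1)
    return d_k + (r - k) * (d_k1 - d_k)

# e.g. a 2 x 2 channel: full diversity 4 at r = 0, d*(1) = 1, d*(1.5) = 0.5.
print(dmt(0, 2, 2), dmt(1, 2, 2), dmt(1.5, 2, 2))
```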
{ "cite_N": [ "@cite_0", "@cite_3" ], "mid": [ "2159960823", "2155797792" ], "abstract": [ "In this paper, the fundamental performance tradeoff of the delay-limited multiple-input multiple-output (MIMO) automatic retransmission request (ARQ) channel is explored. In particular, we extend the diversity-multiplexing tradeoff investigated by Zheng and Tse in standard delay-limited MIMO channels with coherent detection to the ARQ scenario. We establish the three-dimensional tradeoff between reliability (i.e., diversity), throughput (i.e., multiplexing gain), and delay (i.e., maximum number of retransmissions). This tradeoff quantifies the ARQ diversity gain obtained by leveraging the retransmission delay to enhance the reliability for a given multiplexing gain. Interestingly, ARQ diversity appears even in long-term static channels where all the retransmissions take place in the same channel state. Furthermore, by relaxing the input power constraint allowing variable power levels in different retransmissions, we show that power control can be used to dramatically increase the diversity advantage. Our analysis reveals some important insights on the benefits of ARQ in slow-fading MIMO channels. In particular, we show that 1) allowing for a sufficiently large retransmission delay results in an almost flat diversity-multiplexing tradeoff, and hence, renders operating at high multiplexing gain more advantageous; 2) MIMO ARQ channels quickly approach the ergodic limit when power control is employed. Finally, we complement our information-theoretic analysis with an incremental redundancy lattice space-time (IR-LAST) coding scheme which is shown, through a random coding argument, to achieve the optimal tradeoff(s). An integral component of the optimal IR-LAST coding scheme is a list decoder, based on the minimum mean-square error (MMSE) lattice decoding principle, for joint error detection and correction. Throughout the paper, our theoretical claims are validated by numerical results", "The diversity-multiplexing (D-M) tradeoff in a multi antenna channel with optimized resolution-constrained channel state feedback is characterized. The concept of minimum guaranteed multiplexing gain in the forward link is introduced and shown to significantly influence the optimal D-M tradeoff. It is demonstrated that power control based on the feedback is instrumental in achieving the D-M tradeoff, and that rate adaptation is important in obtaining a high diversity gain even at high rates. A criterion to determine finite-length codes to be tradeoff optimal is presented, leading to a useful geometric characterization of the class of extended approximately universal codes. With codes from this class, the optimal D-M tradeoff is achievable by the combination of a feedback-dependent power controller and a single code-book for single-rate or two codebooks for adaptive-rate transmission. Finally, lower bounds to the optimal D-M tradeoffs based on Gaussian coding arguments are also studied. In contrast to the no-feedback case, these random coding bounds are only asymptotically tight, but can quickly approach the optimal tradeoff even with moderate codeword lengths." ] }
0804.3215
2166476129
Packet-switching WDM ring networks with a hotspot transporting unicast, multicast, and broadcast traffic are important components of high-speed metropolitan area networks. For an arbitrary multicast fanout traffic model with uniform, hotspot destination, and hotspot source packet traffic, we analyze the maximum achievable long-run average packet throughput, which we refer to as multicast capacity, of bi-directional shortest path routed WDM rings. We identify three segments that can experience the maximum utilization, and thus, limit the multicast capacity. We characterize the segment utilization probabilities through bounds and approximations, which we verify through simulations. We discover that shortest path routing can lead to utilization probabilities above one half for moderate to large portions of hotspot source multi- and broadcast traffic, and consequently multicast capacities of less than two simultaneous packet transmissions. We outline a one-copy routing strategy that guarantees a multicast capacity of at least two simultaneous packet transmissions for arbitrary hotspot source traffic.
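As a toy illustration of the segment-utilization notion (uniform unicast traffic only, not the paper's multicast/hotspot model or its analytical bounds), segment utilization probabilities under shortest-path routing on a bidirectional ring can be estimated by simulation, with an approximate capacity read off as the reciprocal of the maximum utilization:

```python
import random

def segment_utilizations(n_nodes: int, n_packets: int = 100_000, seed: int = 0):
    """Monte Carlo estimate of per-segment utilization probabilities on a
    bidirectional ring with shortest-path routing and uniform unicast traffic.
    Segment (i, +1) is the clockwise link i -> i+1; (i, -1) is the link i -> i-1."""
    rng = random.Random(seed)
    counts = {(i, d): 0 for i in range(n_nodes) for d in (+1, -1)}
    for _ in range(n_packets):
        src, dst = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if src == dst:
            continue
        cw = (dst - src) % n_nodes                    # clockwise hop distance
        direction = +1 if cw <= n_nodes - cw else -1  # shortest-path direction
        hops = cw if direction == +1 else n_nodes - cw
        node = src
        for _ in range(hops):
            counts[(node, direction)] += 1
            node = (node + direction) % n_nodes
    # utilization of a segment = fraction of packets that traverse it
    return {seg: c / n_packets for seg, c in counts.items()}

util = segment_utilizations(16)
u_max = max(util.values())
print(f"max segment utilization ~ {u_max:.3f}, approx. capacity ~ {1 / u_max:.2f} simultaneous packets")
```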
In recent years there has been increasing research interest in a wide range of aspects of multicasting in general mesh circuit-switched WDM networks, including lightpath design, see for instance @cite_29 , traffic grooming, see e.g., @cite_11 , routing and wavelength assignment, see e.g., @cite_45 @cite_58 @cite_56 , and connection carrying capacity @cite_32 . Similarly, multicasting in packet-switched single-hop star WDM networks has been intensely investigated, see for instance @cite_60 @cite_67 @cite_26 @cite_18 . In contrast to these studies, we focus on packet-switched WDM ring networks in this paper.
{ "cite_N": [ "@cite_67", "@cite_26", "@cite_18", "@cite_60", "@cite_29", "@cite_32", "@cite_56", "@cite_45", "@cite_58", "@cite_11" ], "mid": [ "2123459909", "2121969043", "2166211341", "2123833258", "2103992844", "2108133322", "1971950071", "2162923925", "2135880212", "2137396213" ], "abstract": [ "We investigate optical network unit (ONU) grant scheduling techniques for multichannel Ethernet passive optical networks (EPONs), such as wavelength division multiplexed (WDM) EPONs. We take a scheduling theoretic approach to solving the grant scheduling problem. We introduce a two-layer structure of the scheduling problem and investigate techniques to be used at both layers. We present an extensive ONU grant scheduling simulation study that provides: 1) insight into the nature of the ONU grant scheduling problem and 2) indication of which scheduling techniques are best for certain conditions. We find that the choice of scheduling framework has typically the largest impact on average queueing delay and achievable channel utilization. An offline scheduling framework is not work conserving and consequently wastes channel resources while waiting for all ONU REPORT messages before making access decisions. An online scheduling framework, although work conserving, does not provide the best performance since scheduling decisions are made with the information contained in a single ONU REPORT. We propose a novel online just-in-time (JIT) scheduling framework that is work conserving while increasing scheduling control by allowing the channel availability to drive the scheduling process. In online JIT, multiple ONU REPORTs can be considered together when making scheduling decisions, resulting in lower average queueing delay under certain conditions and a more effective service differentiation of ONUs.", "This paper shows that, for single-hop WDM networks, a multicast scheduling algorithm which always tries to partition a multicast transmission into multiple unicast or multicast transmissions may not always produce lower mean packet delay than a multicast scheduling algorithm which does not partition multicast transmissions. The performance of a multicast scheduling algorithm may depend on the traffic conditions and the availability of the channel resource in the network. A hybrid multicast scheduling algorithm that can produce good performance for wide ranges of the traffic conditions and the availability of the channel resource in the network is proposed. Depending on the average utilizations of the data channels and the receivers, the proposed hybrid multicast scheduling algorithm dynamically chooses to employ a multicast scheduling algorithm which always tries to partition multicast transmissions or a multicast scheduling algorithm which does not partition multicast transmissions. Extensive simulations are performed to study the performance of the proposed hybrid algorithm. Our simulation results show that the proposed hybrid algorithm produces lower mean packet delay for wide ranges of the load, the maximum multicast group size, the percentage of unicast traffic, and the number of data channels in the network compared with a multicast scheduling algorithm which always tries to partition multicast transmissions and a multicast scheduling which does not partition multicast transmissions.", "In this paper, we present a reservation-based medium access control (MAC) protocol with multicast support for wavelength-division multiplexing networks. 
Our system is based on the single-hop, passive optical star architecture. Of the available wavelengths (channels), one channel is designated as a control channel, and the remaining channels are used for data transmission. Each node is equipped with a pair of fixed transceiver to access the control channel, and a fixed transmitter and a tunable receiver to access data channels. For easy implementation of the protocol in hardware and for precisely computing the protocol's processing overhead, we give a register-transfer model of the protocol. We simulate the protocol to study its throughput behavior, and present its analytic model. For a node to be able to send data packets in successive data slots with no time gap between them, in spite of the situation that the protocol's execution time may be longer than data transmission time, we propose the idea of multiple MAC units at each node. Unicast throughput of our protocol reaches the theoretically possible maximum throughput for MAC protocols with distributed control, and the multicast throughput is at least as good as, and even better than, those delivered by existing MAC protocols with distributed control.", "Multicast communication in single-hop broadcast-and-select wavelength-division multiplexing networks has received considerable attention from researchers. This article presents a comprehensive survey of the multicast scheduling techniques in this environment. It considers different challenges that are faced in the design of the multicasting techniques and presents a classification of such schemes. A survey of specific techniques is then presented and a comparison is drawn between such techniques.", "With the advent of next-generation, bandwidth-intensive multimedia applications such as HDTV, interactive distance learning, and movie broadcasts from studios, it is becoming imperative to exploit the enormous bandwidth promised by the rapidly growing wavelength-division-multiplexing (WDM) technology. These applications require multicasting of information from a source to several destination nodes which should be performed judiciously to conserve expensive network resources. In this study, we investigate two switch architectures to support multicasting in a WDM network: one using an opaque (optical-electronic-optical approach and the other using a transparent (all-optical) approach. For both these switch architectures, we present mathematical formulations for routing and wavelength assignment of several light-tree-based multicast sessions on a given network topology at a globally optimal cost. We expand our work to also accommodate: 1) fractional-capacity sessions (where a session's capacity is a fraction of a wavelength channel's bandwidth, thereby leading to \"traffic-groomed\" multicast sessions) and 2) sparse splitting constraints, i.e., limited fanout of optical splitters and limited number of such splitters at each node. We illustrate the solutions obtained on different networks by solving these optimization problems, which turn out to be mixed integer linear programs (MILPs). Because the MILP is computationally intensive and does not scale well for large problem sizes, we also propose fast heuristics for establishing a set of multicast sessions in a network with or without wavelength converters and with fractional-capacity sessions. 
We find that, for all scenarios, the heuristics which arrange the sessions in ascending order with respect to destination set size and or cost perform better in terms of network resource usage than the heuristics which arrange the sessions in descending order.", "Currently, many bandwidth-intensive applications require multicast services for efficiency purposes. In particular, as wavelength division multiplexing (WDM) technique emerges as a promising solution to meet the rapidly growing demands on bandwidth in present communication networks, supporting multicast at the WDM layer becomes an important yet challenging issue. In this paper, we introduce a systematic approach to analyzing the multicast connection capacity of WDM switching networks with limited wavelength conversion. We focus on the practical all-optical limited wavelength conversion with a small conversion degree d (e.g., d = 2 or 3), where an incoming wavelength can be switched to one of the d outgoing wavelengths. We then compare the multicast performance of the network with limited wavelength conversion to that of no wavelength conversion and full wavelength conversion. Our results demonstrate that limited wavelength conversion with small conversion degrees provides a considerable fraction of the performance improvement obtained by full wavelength conversion over no wavelength conversion. We also present an economical multistage switching architecture for limited wavelength conversion. Our results indicate that the multistage switching architecture along with limited wavelength conversion of small degrees is a cost-effective design for WDM multicast switching networks.", "A novel media access control (MAC) protocol named carrier sense multiple access with idle detection (CSMA ID) is proposed to handle variable-length packets over an all-optical ring network. To evaluate optimal utilization of channel bandwidth, we study packet scheduling based on three transmitting queue discipline (TQD) architectures and four idle space allocation (ISA) algorithms with regard to their impact on performance. For numerical evaluation of performance, an analytical model is developed by a preclassification queue with weighted round-robin (PCQWRR) architecture and a random algorithm. Moreover, three related MAC protocols are examined and compared, namely, multitoken, carrier sense multiple access collision avoidance (CSMA CA) and carrier sense multiple access collision preemption (CSMA CP). Simulation results indicate that, of the TQDs, better performance is obtained by PCQWRR compared with first-in-first-out and preclassification queues. The first fit space (FFS) algorithm has the best performance of the ISAs. The 12 combinations of TQDs-ISAs are then considered. It is found that the combination of PCQWRR with FFS provides the greatest efficiency and has the lowest packet latency, providing better throughput than three different MAC protocols under either symmetric or asymmetric traffic load on all-optical ring networks.", "This paper addresses the multicast wavelength assignment (MC-WA) problem in wavelength-routed WDM networks with full light splitting and wavelength conversion capabilities. Current approaches are based on the multicast switch model that supports only split-convert (S-C) switch scheme. This scheme leads to redundant wavelength conversions for a given multicast request. In this paper, we propose a new split-convert-split (S-C-S) switch scheme capable of eliminating the redundant wavelength conversions. 
In order to implement this new switch scheme, we develop a new multicast switch model based on the concept of sharing of light splitters and wavelength converters. Furthermore, existing multicast wavelength assignment algorithm allows only one wavelength to carry the light signal on a fiber link, the so-called single-wavelength assignment strategy. In this paper, we explore the advantages of a new multi-wavelength assignment strategy which allows multiple available wavelengths in a link to carry the multicast signal. This will reduce the number of wavelength conversions required for the multicast request. Consequently, based on the new S-C-S multicast switch model and the new multi-wavelength assignment strategy, we generalize the existing algorithms to produce a new Multicast 'Wavelength Assignment Algorithm (MWAA) to support both the new switch model and the new wavelength assignment strategy. Compared with the existing algorithm, our new algorithm is a more general one which makes the multicast wavelength assignment more flexible, covering different switch schemes and different assignment strategies. In addition, it delivers good performance in term of minimizing the number of wavelength conversions. The improvement percentage is sensitive to the maximum out-degree value of a node, D. For a 100-node multicast tree, the improvement percentage increases from 38 at D = 3 to about 73 at D = 16. This is highly significant", "Multicasting is becoming increasingly important in today's networks. In optical networks, optical splitters facilitate the multicasting of optical signals. By eliminating the transmission of redundant traffic over certain links, multicasting can improve network performance. However, in a wavelength-division multiplexed (WDM) optical network, the lack of wavelength conversion necessitates the establishment of a single multicast circuit (light-tree) on a single wavelength. On the other hand, establishing several unicast connections (lightpaths) to satisfy a multicast request, while requiring more capacity, is less constrained in terms of wavelength assignment. The objective of the paper is to evaluate the tradeoff between capacity and wavelength continuity in the context of optical multicasting. To this end, we develop accurate analytical models with moderate complexity for computing the blocking probability of multicast requests realized using light-trees, lightpaths, and combinations of light-trees and lightpaths. Numerical results indicate that a suitable combination of light-trees and lightpaths performs best when no wavelength conversion is present.", "In this paper we consider the optimal design and provisioning of WDM networks for the grooming of multicast subwavelength traffic. We develop a unified framework for the optimal provisioning of different practical scenarios of multicast traffic grooming. We also introduce heuristic solutions. Optimal solutions are designed by exploiting the specifies of the problems to formulate Mixed Integer Linear Programs (MILPs). Specifically, we solve the generic multicast problem in which, given a set of multicast sessions and all destination nodes of a multicast session requiring the same amount of traffic, all demands need to be accommodated. The objective is to minimize the network cost by minimizing the number of higher layer electronic equipment and, simultaneously, minimizing the total number of wavelengths used. 
We also solve two interesting and practical variants of the traditional multicast problem, namely, multicasting with partial destination set reachability and multicasting with traffic thinning. For both variants, we also provide optimal as well as heuristic solutions. Also, the paper presents a number of examples based on the exact and heuristic approaches" ] }
0804.3215
2166476129
Packet-switching WDM ring networks with a hotspot transporting unicast, multicast, and broadcast traffic are important components of high-speed metropolitan area networks. For an arbitrary multicast fanout traffic model with uniform, hotspot destination, and hotspot source packet traffic, we analyze the maximum achievable long-run average packet throughput, which we refer to as multicast capacity, of bi-directional shortest path routed WDM rings. We identify three segments that can experience the maximum utilization, and thus, limit the multicast capacity. We characterize the segment utilization probabilities through bounds and approximations, which we verify through simulations. We discover that shortest path routing can lead to utilization probabilities above one half for moderate to large portions of hotspot source multi- and broadcast traffic, and consequently multicast capacities of less than two simultaneous packet transmissions. We outline a one-copy routing strategy that guarantees a multicast capacity of at least two simultaneous packet transmissions for arbitrary hotspot source traffic.
Multicasting in circuit-switched WDM rings, which are fundamentally different from the packet-switched networks considered in this paper, has been extensively examined in the literature. The scheduling of connections and the cost-effective design of bidirectional WDM rings were addressed, for instance, in @cite_38 . Cost-effective traffic grooming approaches in WDM rings have been studied, for instance, in @cite_65 @cite_46 . The routing and wavelength assignment in reconfigurable bidirectional WDM rings with wavelength converters was examined in @cite_24 . The wavelength assignment for multicasting in circuit-switched WDM ring networks has been studied in @cite_68 @cite_20 @cite_6 @cite_64 @cite_1 @cite_62 . For unicast traffic, the throughputs achieved by different circuit-switched and packet-switched optical ring network architectures are compared in @cite_31 .
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_64", "@cite_62", "@cite_65", "@cite_1", "@cite_6", "@cite_24", "@cite_46", "@cite_68", "@cite_20" ], "mid": [ "2150891346", "2146345557", "", "2159958726", "2156816333", "2127600627", "", "2103213096", "", "2043777280", "2114119911" ], "abstract": [ "We consider the problem of scheduling all-to-all personalized connections (AAPC) in WDM rings. Scheduling one connection for every source-destination pair in a network of limited connectivity provides a way to reduce routing control and guarantee throughput. For a given number of wavelengths K and a given number of transceivers per node T, we first determine the lower bound (LB) on the schedule length, which depends on both K and T. To achieve the LB, either the network bandwidth, the I O capacity, or both should be fully utilized. This approach first constructs and then schedules circles, each of which is formed by up to four non-overlapping connections and can fully utilize the bandwidth of one wavelength. The proposed circle construction and scheduling algorithms can achieve the LB if K spl les T<N-1 or T=N-1, and closely approach or achieve the LB otherwise. In addition, we determine the appropriate values of T and K for the cost-effective designs in WDM rings through analysis of the schedule length and network throughput.", "The general architecture of a metro ring that interconnects IP routers is illustrated. The ring nodes could be: SONET SDH add drop multiplexers (ADMs), packet nodes, or WDM optical ADMs.", "", "Multicast communication involves transmitting information from a single source node to multiple destination nodes, and is becoming an important requirement in high-performance networks. We study multicast communication in a class of optical WDM networks with regular topologies such as linear arrays, rings, meshes, tori and hypercubes. For each type of network, we derive the necessary and sufficient conditions on the minimum number of wavelengths required for a WDM network to be wide-sense nonblocking for multicast communication under some commonly used routing algorithms.", "We provide network designs for optical add-drop wavelength-division-multiplexed (OADM) rings that minimize overall network cost, rather than just the number of wavelengths needed. The network cost includes the cost of the transceivers required at the nodes as well as the number of wavelengths. The transceiver cost includes the cost of terminating equipment as well as higher-layer electronic processing equipment, which in practice can dominate over the cost of the number of wavelengths in the network. The networks support dynamic (i.e., time-varying) traffic streams that are at lower rates (e.g., OC-3, 155 Mb s) than the lightpath capacities (e.g., OC-48, 2.5 Gb s). A simple OADM ring is the point-to-point ring, where traffic is transported on WDM links optically, but switched through nodes electronically. Although the network is efficient in using link bandwidth, it has high electronic and opto-electronic processing costs. Two OADM ring networks are given that have similar performance but are less expensive. Two other OADM ring networks are considered that are nonblocking, where one has a wide-sense nonblocking property and the other has a rearrangeably nonblocking property. 
All the networks are compared using the cost criteria of number of wavelengths and number of transceivers.", "We study the problem of wavelength assignment for multicast in order to maximize the network capacity in all-optical wavelength-division multiplexing networks. The motivation behind this work is to minimize the call blocking probability by maximizing the remaining network capacity after each wavelength assignment. While all previous studies on the same objective concentrate only on the unicast case, we study the problem for the multicast case. For a general multicast tree, we prove that the multicast wavelength assignment problem of maximizing the network capacity is NP-hard and propose two efficient greedy algorithms. We also study the same problem for a special network topology, a bidirectional ring network, which is practically the most important topology for optical networks. For bidirectional ring networks, a special multicast tree with at most two leaf nodes is constructed. Polynomial time algorithms for multicast wavelength assignment to maximize the network capacity exist under such a special multicast tree with regard to different splitting capabilities. Our work is the first effort to study the multicast wavelength assignment problem under the objective of maximizing network capacity.", "", "We consider the problem of wavelength assignment in reconfigurable WDM networks with wavelength converters. We show that for N-node P-port bidirectional rings, a minimum number of spl lceil PN 4 spl rceil wavelengths are required to support all possible connected virtual topologies in a rearrangeably nonblocking fashion, and provide an algorithm that meets this bound using no more than spl lceil PN 2 spl rceil wavelength converters. This improves over the tight lower bound of spl lceil PN 3 spl rceil wavelengths required for such rings given in if no wavelength conversion is available. We extend this to the general P-port case where each node i may have a different number of ports P sub i , and show that no more than spl lceil spl sigma sub i P sub i 4 spl rceil +1 wavelengths are required. We then provide a second algorithm that uses more wavelengths yet requires significantly fewer converters. We also develop a method that allows the wavelength converters to be arbitrarily located at any node in the ring. This gives significant flexibility in the design of the networks. For example, all spl lceil PN 2 spl rceil converters can be collocated at a single hub node, or distributed evenly among the N nodes with min spl lceil P 2 spl rceil +1,P converters at each node.", "", "The optimal multiple multicast problem (OMMP) on wavelength division multiplexing ring networks without wavelength conversion is considered in this paper. When the physical network and the set of multicast requests are given, OMMP is the problem that selects a suitable path or (paths) and wavelength (or wavelengths) among the many possible choices for each multicast request under the constraint that not any paths using the same wavelength pass through the same link such that the number of used wavelengths is minimized. This problem can be proven to be NP-hard. In the paper, a formulation of OMMP is given and several genetic algorithms (GAs) are proposed to solve it. Experimental results indicate that the proposed GAs are robust for this problem.", "In this paper, we consider the problem of multicasting with multiple originators in WDM optical networks. 
In this problem, we are given a set of S source nodes and a set D of destination nodes in a network. All source nodes are capable of providing data to any destination node. Our objective is to find a virtual topology in the WDM network which satisfies given constraints on available resources and is optimal with respect to minimizing the maximum hop distance. Although the corresponding decision problem is NP-complete in general, we give polynomial time algorithms for the cases of unidirectional paths and rings." ] }
0804.3215
2166476129
Packet-switching WDM ring networks with a hotspot transporting unicast, multicast, and broadcast traffic are important components of high-speed metropolitan area networks. For an arbitrary multicast fanout traffic model with uniform, hotspot destination, and hotspot source packet traffic, we analyze the maximum achievable long-run average packet throughput, which we refer to as multicast capacity, of bi-directional shortest path routed WDM rings. We identify three segments that can experience the maximum utilization, and thus, limit the multicast capacity. We characterize the segment utilization probabilities through bounds and approximations, which we verify through simulations. We discover that shortest path routing can lead to utilization probabilities above one half for moderate to large portions of hotspot source multi- and broadcast traffic, and consequently multicast capacities of less than two simultaneous packet transmissions. We outline a one-copy routing strategy that guarantees a multicast capacity of at least two simultaneous packet transmissions for arbitrary hotspot source traffic.
Studies of non-uniform traffic in optical networks have generally focused on issues arising in circuit-switched optical networks, see for instance @cite_21 @cite_2 @cite_42 @cite_25 @cite_13 @cite_22 @cite_69 . A comparison of circuit-switching to optical burst switching network technologies, including a brief comparison for non-uniform traffic, was conducted in @cite_55 . The throughput characteristics of a mesh network interconnecting routers on an optical ring through fiber shortcuts for non-uniform unicast traffic were examined in @cite_70 . The study @cite_34 considered the throughput characteristics of a ring network with uniform unicast traffic, where the nodes may adjust their send probabilities in a non-uniform manner. The multicast capacity of a single-wavelength packet-switched ring with non-uniform traffic was examined in @cite_54 . In contrast to these works, we consider non-uniform traffic with an arbitrary fanout, which accommodates a wide range of unicast, multicast, and broadcast traffic mixes, in a WDM ring network.
{ "cite_N": [ "@cite_69", "@cite_22", "@cite_70", "@cite_55", "@cite_21", "@cite_42", "@cite_54", "@cite_2", "@cite_34", "@cite_13", "@cite_25" ], "mid": [ "2114465585", "2031724810", "2151846633", "2095945182", "2155589525", "", "2135114137", "", "2140788012", "2127388041", "" ], "abstract": [ "In high-speed SONET rings with point-to-point WDM links, the cost of SONET add-drop multiplexers (S-ADMs) can be dominantly high. However, by grooming traffic (i.e., multiplexing lower-rate streams) appropriately and using wavelength ADMs (WADMs), the number of S-ADMs can be dramatically reduced. In this paper, we propose optimal or near-optimal algorithms for traffic grooming and wavelength assignment to reduce both the number of wavelengths and the number of S-ADMs. The algorithms proposed are generic in that they can be applied to both unidirectional and bidirectional rings having an arbitrary number of nodes under both uniform and nonuniform (i.e., arbitrary) traffic with an arbitrary grooming factor. Some lower bounds on the number of wavelengths and S-ADMs required for a given traffic pattern are derived, and used to determine the optimality of the proposed algorithms. Our study shows that using the proposed algorithms, these lower bounds can he closely approached in most cases or even achieved in some cases. In addition, even when using a minimum number of wavelengths, the savings in S-ADMs due to traffic grooming (and the use of WADMs) are significant, especially for large networks.", "A new and tight lower bound on the number of ADMs with arbitrary nonuniform traffic demands in a SONET WDM ring network is derived. Simulations show that this lower bound is much tighter than the previous one and for some cases it reaches the infimum.", "We study the SMARTNet (scalable multichannel adaptable ring terabit network), an optical wavelength-division packet-switching meshed-ring network employing wavelength routers. We investigate the ways to configure these wavelength routers in adapting to the underlying traffic matrix. One heuristic algorithm for this purpose is developed. We show that for any traffic matrix, the throughput performance of the SMARTNet employing wavelength cross-connect routing devices is upper bounded by that of a corresponding meshed-ring network employing store-and-forward switching devices. The difference diminishes as the number of employed wavelengths increases. Furthermore, for a specific meshed-ring: topology, we show that the above upper bound throughput level is achievable under uniform traffic loading at even a small number of wavelengths. Under nonuniform traffic loading, we show that a throughput level equal to 80 to 90 of the upper bound value can be achieved with a small number (five to seven) of wavelengths.", "This paper investigates the challenges for developing the current local area network (LAN)-based Ethernet protocol into a technology for future network architectures that is capable of satisfying dynamic traffic demands with hard service guarantees using high-bit-rate channels (80...100 Gb s). The objective is to combine high-speed optical transmission and physical interfaces (PHY) with a medium access control (MAC) protocol, designed to meet the service guarantees in future metropolitan-area networks (MANs). Ethernet is an ideal candidate for the extension into the MAN as it allows seamless compatibility with the majority of existing LANs. 
The proposed extension of the MAC protocol focuses on backward compatibility as well as on the exploitation of the wavelength domain for routing of variable traffic demands. The high bit rates envisaged will easily exhaust the capacity of a single optical fiber in the C band and will require network algorithms optimizing the reuse of wavelength resources. To investigate this, four different static and dynamic optical architectures were studied that potentially offer advantages over current link-based designs. Both analytical and numerical modeling techniques were applied to quantify and compare the network performance for all architectures in terms of achievable throughput, delay, and the number of required wavelengths and to investigate the impact of nonuniform traffic demands. The results show that significant resource savings can be achieved by using end-to-end dynamic lightpath allocation, but at the expense of high delay.", "In this paper, we study the benefits of using tunable transceivers for reducing the required number of electronic ports in wavelength-division-multiplexing time-division multiplexing optical networks. We show that such transceivers can be used to efficiently \"groom\" subwavelength traffic in the optical domain and so can significantly reduce the amount of terminal equipment needed compared with the fixed-tuned case. Formulations for this \"tunable grooming\" problem are provided, where the objective is to schedule transceivers so as to minimize the required number of ports needed for a given traffic demand. We establish a relationship between this problem and edge colorings of graphs which are determined by the offered traffic. Using this relationship, we show that, in general, this problem is NP-complete, but we are able to efficiently solve it for many cases of interest. When the number of wavelengths in the network is not limited, each node is shown to only require the minimum number of transceivers (i.e., no more transceivers than the amount of traffic that it generates). This holds regardless of the network topology or traffic pattern. When the number of wavelengths is limited, an analogous result is shown for both uniform and hub traffic in a ring. We then develop a heuristic algorithm for general traffic that uses nearly the minimum number of transceivers. In most cases, tunable transceivers are shown to reduce the number of ports per node by as much as 60 . We also consider the case where traffic can dynamically change among an allowable set of traffic demands. Tunability is again shown to significantly reduce the port requirement for a nonblocking ring, both with and without rearrangements.", "", "Hotspot traffic is common in metro ring networks connecting access networks with backbone networks, and these metro rings are also expected to support a mix of unicast, multicast, and broadcast traffic. Shortest path (SP) routing, as employed in the IEEE 802.17 Resilient Packet Ring (RPR), is widely considered for metro rings as it maximizes spatial reuse and, thus, the achievable packet throughput (capacity) for uniform traffic. In this paper, we analyze the capacity of bidirectional optical ring networks, such as RPR, employing SP routing for multicast (nonuniform) hotspot traffic (whereby unicast and broadcast are considered as special cases of multicast). We find that, when the traffic originating at the hotspot exceeds a critical threshold, then SP routing leads to substantial reductions in capacity to a value close to one simultaneous packet transmission. 
To overcome this limitation of SP routing, we propose a simple combined SP one-copy routing Hotspot traffic is common in metro ring networks connecting access networks with backbone networks, and these metro rings are also expected to support a mix of unicast, multicast, and broadcast traffic. Shortest path (SP) routing, as employed in the IEEE 802.17 resilient packet ring (RPR), is widely considered for metro rings as it maximizes spatial reuse and, thus, the achievable packet throughput (capacity) for uniform traffic. In this paper, we analyze the capacity of bidirectional optical ring networks, such as RPR, employing SP routing for multicast (nonuniform) hotspot traffic (whereby unicast and broadcast are considered as special cases of multicast). We find that, when the traffic originating at the hotspot exceeds a critical threshold, then SP routing leads to substantial reductions in capacity to a value close to one simultaneous packet transmission. To overcome this limitation of SP routing, we propose a simple combined SP one-copy routing strategy that provides a capacity of at least two simultaneous packet transmissions.strategy that provides a capacity of at least two simultaneous packet transmissions.", "", "In this paper we present an analytical model for the evaluation of access delays and bandwidth efficiency in wavelength-division multiplexing slotted-ring networks where nodes are equipped with one fixed transmitter and several fixed receivers. The system has been analyzed under uniform and non-uniform traffic patterns, for both source and destination stripping schemes. Closed-form and exact expressions for bandwidth efficiency and access blocking probability were obtained. The model is validated by simulations.", "Traffic grooming is the term used to describe how different traffic streams are packed into higher speed streams. In a synchronous optical network-wavelength division multiplexing (SONET-WDM) ring network, each wavelength can carry several lower-rate traffic streams in time division (TDM) fashion. The traffic demand, which is an integer multiple of the timeslot capacity, between any two nodes is established on several TDM virtual connections. A virtual connection needs to be added and dropped only at the two end nodes of the connection; as a result, the electronic add-drop multiplexers (ADMs) at intermediate nodes (if there are any) will electronically bypass this timeslot. Instead of having an ADM on every wavelength at every node, it may be possible to have some nodes on some wavelength where no add-drop is needed on any timeslot; thus, the total number of ADMs in the network (and, hence, the network cost) can be reduced. Under the static traffic pattern, the savings can be maximized by carefully packing the virtual connections into wavelengths. In this work, we allow arbitrary (nonuniform) traffic and we present a formal mathematical definition of the problem, which turns out to be an integer linear program (ILP). Then, we propose a simulated-annealing-based heuristic algorithm for the case where all the traffic is carried on directly connected virtual connections (referred to as the single-hop case). Next, we study the case where a hub node is used to bridge traffic from different wavelengths (referred to as the multihop case). We find the following main results. The simulated-annealing-based approach has been found to achieve the best results, so far, in most cases, relative to other comparable approaches proposed in the literature. 
In general, a multihop approach can achieve better equipment savings when the traffic-grooming ratio is large, but it consumes more bandwidth.", "" ] }
0804.3599
2950669208
We present an approach to improving the precision of an initial document ranking wherein we utilize cluster information within a graph-based framework. The main idea is to perform re-ranking based on centrality within bipartite graphs of documents (on one side) and clusters (on the other side), on the premise that these are mutually reinforcing entities. Links between entities are created via consideration of language models induced from them. We find that our cluster-document graphs give rise to much better retrieval performance than previously proposed document-only graphs do. For example, authority-based re-ranking of documents via a HITS-style cluster-based approach outperforms a previously-proposed PageRank-inspired algorithm applied to solely-document graphs. Moreover, we also show that computing authority scores for clusters constitutes an effective method for identifying clusters containing a large percentage of relevant documents.
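Below is a minimal sketch of a HITS-style mutual-reinforcement computation on a bipartite document-cluster graph, in the spirit of the approach described above (the language-model link construction is abstracted into a given weight matrix; all names and values are illustrative, not the paper's implementation):

```python
import numpy as np

def bipartite_hits(W: np.ndarray, iters: int = 50):
    """HITS-style scores on a bipartite graph.
    W[d, c] > 0 if document d is linked to cluster c (e.g. a language-model similarity).
    Returns (doc_authority, cluster_hub) after `iters` mutual-reinforcement updates."""
    n_docs, n_clusters = W.shape
    doc_auth = np.ones(n_docs)
    cluster_hub = np.ones(n_clusters)
    for _ in range(iters):
        # a document is authoritative if it is pointed to by strong cluster hubs
        doc_auth = W @ cluster_hub
        # a cluster is a good hub if it points to authoritative documents
        cluster_hub = W.T @ doc_auth
        doc_auth /= np.linalg.norm(doc_auth) or 1.0
        cluster_hub /= np.linalg.norm(cluster_hub) or 1.0
    return doc_auth, cluster_hub

# Toy example: 4 documents, 2 clusters; re-rank an initial list by document authority.
W = np.array([[0.9, 0.0],
              [0.8, 0.1],
              [0.1, 0.7],
              [0.0, 0.6]])
auth, hub = bipartite_hits(W)
reranked = sorted(range(len(auth)), key=lambda d: -auth[d])
print("documents by authority:", reranked, "cluster hub scores:", hub)
```

The cluster hub scores also illustrate the second observation in the abstract: clusters that point strongly at authoritative documents receive high scores, which can be used to flag clusters likely to contain many relevant documents.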
Recently, there has been a growing body of work on graph-based modeling for different language-processing tasks, wherein links are induced by inter-entity textual similarities. Examples include document (re-)ranking @cite_17 @cite_0 @cite_31 @cite_10 @cite_32 , text summarization @cite_21 @cite_37 , sentence retrieval @cite_1 , and document representation @cite_30 . In contrast to our methods, the links in these works connect entities of the same type, and clusters of entities are not modeled within the graphs.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_21", "@cite_1", "@cite_32", "@cite_0", "@cite_31", "@cite_10", "@cite_17" ], "mid": [ "1992795877", "1525595230", "2110693578", "2018557178", "", "1501307387", "2158201212", "2950763224", "1975998118" ], "abstract": [ "We propose a new document vector representation specifically designed for the document clustering task. Instead of the traditional term-based vectors, a document is represented as an n-dimensional vector, where n is the number of documents in the cluster. The value at each dimension of the vector is closely related to the generation probability based on the language model of the corresponding document. Inspired by the recent graph-based NLP methods, we reinforce the generation probabilities by iterating random walks on the underlying graph representation. Experiments with k-means and hierarchical clustering algorithms show significant improvements over the alternative tf·idf vector representation.", "In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications.", "We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.", "We consider the problem of question-focused sentence retrieval from complex news articles describing multi-event stories published over time. Annotators generated a list of questions central to understanding each story in our corpus. Because of the dynamic nature of the stories, many questions are time-sensitive (e.g. \"How many victims have been found?\") Judges found sentences providing an answer to each question. To address the sentence retrieval problem, we apply a stochastic, graph-based method for comparing the relative importance of the textual units, which was previously used successfully for generic summarization. Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares the similarity of each sentence to the input question via IDF-weighted word overlap. 
In our experiments, the method achieves a TRDR score that is significantly higher than that of the baseline.", "", "The University of Chicago participated in the Cross-Language Evaluation Forum 2004 (CLEF2004) cross-language multilingual, bilingual, and spoken language tracks. Cross-language experiments focused on meeting the challenges of new languages with freely available resources. We found that modest effectiveness could be achieved with the additional application of pseudo-relevance feedback to overcome some gaps in impoverished lexical resources. Experiments with a new dimensionality reduction approach for re-ranking of retrieved results yielded no improvement, however. Finally, spoken document retrieval experiments aimed to meet the challenges of unknown story boundary conditions and noisy retrieval through query-based merger of fine-grained overlapping windows and pseudo-feedback query expansion to enhance retrieval.", "The cluster hypothesis states: closely related documents tend to be relevant to the same request. We exploit this hypothesis directly by adjusting ad hoc retrieval scores from an initial retrieval so that topically related documents receive similar scores. We refer to this process as score regularization. Score regularization can be presented as an optimization problem, allowing the use of results from semi-supervised learning. We demonstrate that regularized scores consistently and significantly rank documents better than unregularized scores, given a variety of initial retrieval algorithms. We evaluate our method on two large corpora across a substantial number of topics.", "Inspired by the PageRank and HITS (hubs and authorities) algorithms for Web search, we propose a structural re-ranking approach to ad hoc information retrieval: we reorder the documents in an initially retrieved set by exploiting asymmetric relationships between them. Specifically, we consider generation links, which indicate that the language model induced from one document assigns high probability to the text of another; in doing so, we take care to prevent bias against long documents. We study a number of re-ranking criteria based on measures of centrality in the graphs formed by generation links, and show that integrating centrality into standard language-model-based retrieval is quite effective at improving precision at top ranks.", "Abstract One of the most important problems in information retrieval is determining the order of documents in the answer returned to the user. Many methods and algorithms for document ordering have been proposed. The method introduced in this paper differs from them especially in that it uses a probabilistic model of a document set. In this model documents are regarded as states of a Markov chain, where transition probabilities are directly proportional to similarities between documents. Steady-state probabilities reflect similarities of particular documents to the whole answer set. If documents are ordered according to these probabilities, at the top of a list there will be documents that are the best representatives of the set, and at the bottom those which are the worst representatives. The method was tested against databases INSPEC and Networked Computer Science Technical Reference Library (NCSTRL). Test results are positive. Values of the Kendall rank correlation coefficient indicate high similarity between rankings generated by the proposed method and rankings produced by experts. 
Results are comparable with rankings generated by the vector model using standard weighting schema tf·idf." ] }
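To make the graph-based ranking idea surveyed above concrete, here is a minimal sketch (Python) of LexRank-style centrality: sentences are linked by cosine similarity and scored by a damped power iteration. The bag-of-words representation, damping factor, and iteration count are illustrative assumptions, not details taken from the cited papers.

```python
# Minimal sketch of LexRank-style centrality, assuming a bag-of-words sentence
# representation and a damping factor of 0.85; these choices are illustrative only.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def lexrank(sentences, damping=0.85, iters=50):
    bags = [Counter(s.lower().split()) for s in sentences]
    n = len(bags)
    sim = [[cosine(bags[i], bags[j]) if i != j else 0.0 for j in range(n)]
           for i in range(n)]
    # Row-normalise the similarity matrix to obtain a stochastic transition matrix.
    rows = [[x / sum(r) for x in r] if sum(r) else [1.0 / n] * n for r in sim]
    score = [1.0 / n] * n
    for _ in range(iters):
        score = [(1 - damping) / n +
                 damping * sum(score[i] * rows[i][j] for i in range(n))
                 for j in range(n)]
    return score

if __name__ == "__main__":
    sents = ["the cat sat on the mat",
             "a cat lay on a mat",
             "stock markets fell sharply today"]
    print(lexrank(sents))  # the two lexically similar sentences receive higher centrality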
0804.3599
2950669208
We present an approach to improving the precision of an initial document ranking wherein we utilize cluster information within a graph-based framework. The main idea is to perform re-ranking based on centrality within bipartite graphs of documents (on one side) and clusters (on the other side), on the premise that these are mutually reinforcing entities. Links between entities are created via consideration of language models induced from them. We find that our cluster-document graphs give rise to much better retrieval performance than previously proposed document-only graphs do. For example, authority-based re-ranking of documents via a HITS-style cluster-based approach outperforms a previously-proposed PageRank-inspired algorithm applied to solely-document graphs. Moreover, we also show that computing authority scores for clusters constitutes an effective method for identifying clusters containing a large percentage of relevant documents.
While ideas similar to ours, in that they leverage the mutual reinforcement of entities of different types or use bipartite graphs of such entities for clustering (rather than using clusters), are abundant (e.g., @cite_38 @cite_9 @cite_33 ), we focus here on exploiting mutual reinforcement in ad hoc retrieval.
{ "cite_N": [ "@cite_38", "@cite_9", "@cite_33" ], "mid": [ "2110891728", "2133576408", "1972645849" ], "abstract": [ "We describe a method for automatic word sense disambiguation using a text corpus and a machine-readble dictionary (MRD). The method is based on word similarity and context similarity measures. Words are considered similar if they appear in similar contexts; contexts are similar if they contain similar words. The circularity of this definition is resolved by an iterative, converging process, in which the system learns from the corpus a set of typical usages for each of the senses of the polysemous word listed in the MRD. A new instance of a polysemous word is assigned the sense associated with the typical usage most similar to its context. Experiments show that this method can learn even from very sparse training data, achieving over 92 correct disambiguation performance.", "Both document clustering and word clustering are well studied problems. Most existing algorithms cluster documents and words separately but not simultaneously. In this paper we present the novel idea of modeling the document collection as a bipartite graph between documents and words, using which the simultaneous clustering problem can be posed as a bipartite graph partitioning problem. To solve the partitioning problem, we use a new spectral co-clustering algorithm that uses the second left and right singular vectors of an appropriately scaled word-document matrix to yield good bipartitionings. The spectral algorithm enjoys some optimality properties; it can be shown that the singular vectors solve a real relaxation to the NP-complete graph bipartitioning problem. We present experimental results to verify that the resulting co-clustering algorithm works well in practice.", "!# @math ) 4 ' \" + @math 4 ) @ & A B C D E . * . ! F ' G H ' IJ K L * M & + *) N z'†ˆ‡cC ‰g) ' & Š p : q : ‹ r ŒHt†d|#t9 ,s tvuŽw x yFz €|;w]tS 9 x |;  ‡ C n+ U <dU' 4 2" ] }
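As an illustration of the mutual-reinforcement idea behind the cluster-document graphs above, the following is a small HITS-style sketch on a bipartite graph. The edge weights are made-up placeholders; in the actual approach the links are induced from language models, which is not reproduced here.

```python
# HITS-style mutual reinforcement on a bipartite document-cluster graph.
# The edge weights below are arbitrary stand-ins for language-model-induced link strengths.
import math

def bipartite_hits(weights, iters=30):
    """weights[d][c] = strength of the link between document d and cluster c."""
    n_docs = len(weights)
    n_clusters = len(weights[0]) if n_docs else 0
    doc_score = [1.0] * n_docs
    cluster_score = [1.0] * n_clusters
    for _ in range(iters):
        # A cluster is authoritative if high-scoring documents point to it, and vice versa.
        cluster_score = [sum(weights[d][c] * doc_score[d] for d in range(n_docs))
                         for c in range(n_clusters)]
        doc_score = [sum(weights[d][c] * cluster_score[c] for c in range(n_clusters))
                     for d in range(n_docs)]
        for vec in (cluster_score, doc_score):          # L2-normalise each score vector
            norm = math.sqrt(sum(x * x for x in vec)) or 1.0
            vec[:] = [x / norm for x in vec]
    return doc_score, cluster_score

if __name__ == "__main__":
    w = [[0.9, 0.1],   # document 0 mostly belongs to cluster 0
         [0.8, 0.2],   # document 1 mostly belongs to cluster 0
         [0.1, 0.9]]   # document 2 mostly belongs to cluster 1
    docs, clusters = bipartite_hits(w)
    print(docs, clusters)  # documents in the larger, more coherent cluster score higher
```

The authority scores computed on the cluster side correspond to the cluster-ranking use mentioned in the abstract above.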
0804.3599
2950669208
We present an approach to improving the precision of an initial document ranking wherein we utilize cluster information within a graph-based framework. The main idea is to perform re-ranking based on centrality within bipartite graphs of documents (on one side) and clusters (on the other side), on the premise that these are mutually reinforcing entities. Links between entities are created via consideration of language models induced from them. We find that our cluster-document graphs give rise to much better retrieval performance than previously proposed document-only graphs do. For example, authority-based re-ranking of documents via a HITS-style cluster-based approach outperforms a previously-proposed PageRank-inspired algorithm applied to solely-document graphs. Moreover, we also show that computing authority scores for clusters constitutes an effective method for identifying clusters containing a large percentage of relevant documents.
Random walks (with early stopping) over bipartite graphs of terms and documents were used for query expansion @cite_6 , but in contrast to our work, no stationary solution was sought. A similar "short chain" approach utilizing bipartite graphs of clusters and documents for ranking an entire corpus was recently proposed @cite_26 , thereby constituting the work most resembling ours. However, again, a stationary distribution was not sought. Also, query drift prevention mechanisms were required to obtain good performance; in our setting, we need not employ such mechanisms.
{ "cite_N": [ "@cite_26", "@cite_6" ], "mid": [ "2949069903", "2068905009" ], "abstract": [ "We present a novel approach to pseudo-feedback-based ad hoc retrieval that uses language models induced from both documents and clusters. First, we treat the pseudo-feedback documents produced in response to the original query as a set of pseudo-queries that themselves can serve as input to the retrieval process. Observing that the documents returned in response to the pseudo-queries can then act as pseudo-queries for subsequent rounds, we arrive at a formulation of pseudo-query-based retrieval as an iterative process. Experiments show that several concrete instantiations of this idea, when applied in conjunction with techniques designed to heighten precision, yield performance results rivaling those of a number of previously-proposed algorithms, including the standard language-modeling approach. The use of cluster-based language models is a key contributing factor to our algorithms' success.", "We present a framework for information retrieval that combines document models and query models using a probabilistic ranking function based on Bayesian decision theory. The framework suggests an operational retrieval model that extends recent developments in the language modeling approach to information retrieval. A language model for each document is estimated, as well as a language model for each query, and the retrieval problem is cast in terms of risk minimization. The query language model can be exploited to model user preferences, the context of a query, synonomy and word senses. While recent work has incorporated word translation models for this purpose, we introduce a new method using Markov chains defined on a set of documents to estimate the query models. The Markov chain method has connections to algorithms from link analysis and social networks. The new approach is evaluated on TREC collections and compared to the basic language modeling approach and vector space models together with query expansion using Rocchio. Significant improvements are obtained over standard query expansion methods for strong baseline TF-IDF systems, with the greatest improvements attained for short queries on Web data." ] }
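The contrast drawn above between a truncated ("short chain") walk and a stationary solution can be illustrated with a toy transition matrix; the matrix below is made up and has nothing to do with the cited collections.

```python
# Illustrative sketch: an early-stopped random walk versus a (near-)stationary distribution.

def walk(P, start, steps):
    """Distribution after `steps` applications of the row-stochastic matrix P."""
    dist = start[:]
    n = len(P)
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

if __name__ == "__main__":
    P = [[0.0, 0.7, 0.3],
         [0.4, 0.0, 0.6],
         [0.5, 0.5, 0.0]]
    start = [1.0, 0.0, 0.0]      # the walk begins at the node matching the query
    print(walk(P, start, 3))     # early-stopped walk: still biased toward the start node
    print(walk(P, start, 200))   # long walk: approaches the stationary distribution
```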
0804.3784
2132489970
We study the graph constructed on a Poisson point process in @math dimensions by connecting each point to the @math points nearest to it. This graph a.s. has an infinite cluster if @math where @math , known as the critical value, depends only on the dimension @math . This paper presents an improved upper bound of 188 on the value of @math . We also show that if @math the infinite cluster of @math has an infinite subset of points with the property that the distance along the edges of the graphs between these points is at most a constant multiplicative factor larger than their Euclidean distance. Finally we discuss in detail the relevance of our results to the study of multi-hop wireless sensor networks.
The study of random graphs obtained by applying connection rules on stationary point processes is known as continuum percolation. Meester and Roy's monograph on the subject provides an excellent overview of the deep theory that has been developed around this general setting @cite_11 . The @math model was introduced by Häggström and Meester @cite_13 . They showed that there is a finite critical value @math for all @math such that an infinite cluster exists in this model. They proved that the infinite cluster is unique and that there is a value @math such that @math for all @math . Teng and Yao gave an upper bound of 213 for @math @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2035452268", "2015859644", "" ], "abstract": [ "Let P be a realization of a homogeneous Poisson point process in ℝd with density 1. We prove that there exists a constant kd, 1<kd<∞, such that the k-nearest neighborhood graph of P has an infinite connected component with probability 1 when k≥kd. In particular, we prove that k2≤213. Our analysis establishes and exploits a close connection between the k-nearest neighborhood graphs of a Poisson point set and classical percolation theory. We give simulation results which suggest k2=3. We also obtain similar results for finite random point sets.", "Consider a Poisson process X in Rd with density 1. We connect each point of X to its k nearest neighbors by undirected edges. The number k is the parameter in this model. We show that, for k = 1, no percolation occurs in any dimension, while, for k = 2, percolation occurs when the dimension is sufficiently large. We also show that if percolation occurs, then there is exactly one infinite cluster. Another percolation model is obtained by putting balls of radius zero around each point of X and let the radii grow linearly in time until they hit another ball. We show that this model exists and that there is no percolation in the limiting configuration. Finally we discuss some general properties of percolation models where balls placed at Poisson points are not allowed to overlap (but are allowed to be tangent). 0 1996 John Wiley & Sons, Inc.", "" ] }
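A small simulation can illustrate the k-nearest-neighbour percolation model described above: sample a Poisson point process in a square window, join each point to its k nearest neighbours, and measure the largest connected component. The window size, intensity, and values of k below are arbitrary illustrative choices, and a finite window can of course only hint at the infinite-volume behaviour studied in the cited work.

```python
# Simulation sketch of the k-nearest-neighbour graph on a Poisson point process.
import random, math

def rng_poisson(lam, rng):
    # Knuth's method; adequate for the moderate lambda used in this demo.
    l, k, p = math.exp(-lam), 0, 1.0
    while p > l:
        k += 1
        p *= rng.random()
    return k - 1

def poisson_points(intensity, side, rng):
    n = rng_poisson(intensity * side * side, rng)
    return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

def knn_graph(points, k):
    edges = set()
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))  # undirected: join i to its k nearest points
    return edges

def largest_component(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0)

if __name__ == "__main__":
    rng = random.Random(0)
    pts = poisson_points(intensity=1.0, side=10.0, rng=rng)
    for k in (1, 2, 3):
        comp = largest_component(len(pts), knn_graph(pts, k))
        print(f"k={k}: {comp} of {len(pts)} points in the largest component")
```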
0804.3784
2132489970
We study the graph constructed on a Poisson point process in @math dimensions by connecting each point to the @math points nearest to it. This graph a.s. has an infinite cluster if @math where @math , known as the critical value, depends only on the dimension @math . This paper presents an improved upper bound of 188 on the value of @math . We also show that if @math the infinite cluster of @math has an infinite subset of points with the property that the distance along the edges of the graphs between these points is at most a constant multiplicative factor larger than their Euclidean distance. Finally we discuss in detail the relevance of our results to the study of multi-hop wireless sensor networks.
Eppstein, Paterson and Yao @cite_5 studied @math -nearest neighbour graphs on random point sets in two dimensions in some detail and proved interesting bounds showing that the number of points in a component of depth @math was polynomial in @math when @math was 1 and exponential in @math when it was 2 or greater. Their primary interest was in obtaining low dilation embeddings of nearest-neighbor graphs.
{ "cite_N": [ "@cite_5" ], "mid": [ "2072596762" ], "abstract": [ "The \"nearest-neighbor\" relation, or more generally the \"k-nearest-neighbors\" relation, defined for a set of points in a metric space, has found many uses in computational geometry and clustering analysis, yet surprisingly little is known about some of its basic properties. In this paper we consider some natural questions that are motivated by geometric embedding problems. We derive bounds on the relationship between size and depth for the components of a nearest-neighbor graph and prove some probabilistic properties of the k-nearest-neighbors graph for a random set of points." ] }
0804.3784
2132489970
We study the graph constructed on a Poisson point process in @math dimensions by connecting each point to the @math points nearest to it. This graph a.s. has an infinite cluster if @math where @math , known as the critical value, depends only on the dimension @math . This paper presents an improved upper bound of 188 on the value of @math . We also show that if @math the infinite cluster of @math has an infinite subset of points with the property that the distance along the edges of the graphs between these points is at most a constant multiplicative factor larger than their Euclidean distance. Finally we discuss in detail the relevance of our results to the study of multi-hop wireless sensor networks.
Algorithms for nearest neighbor search (see e.g. @cite_16 @cite_22 ) and for efficiently constructing nearest neighbor graphs (see e.g. @cite_14 ) have also received a lot of attention. However, this line of work is not directly related to ours, so we do not survey it in detail.
{ "cite_N": [ "@cite_14", "@cite_16", "@cite_22" ], "mid": [ "1550211845", "139098497", "1993216012" ], "abstract": [ "Let @math be a set of elements and d a distance function defined among them. Let NNk(u) be the k elements in @math having the smallest distance to u. The k-nearest neighbor graph (knng) is a weighted directed graph @math such that E= (u,v), v∈NNk(u) . Several knng construction algorithms are known, but they are not suitable to general metric spaces. We present a general methodology to construct knngs that exploits several features of metric spaces. Experiments suggest that it yields costs of the form c1n1.27 distance computations for low and medium dimensional spaces, and c2n1.90 for high dimensional ones.", "Given a set S of points in a metric space with distance function D, the nearest-neighbor searching problem is to build a data structure for S so that for an input query point q, the point s 2 S that minimizes D(s,q) can be found quickly. We survey approaches to this problem, and its relation to concepts of metric space dimension. Several measures of dimension can be estimated using nearest-neighbor searching, while others can be used to estimate the cost of that searching. In recent years, several data structures have been proposed that are provably good for low-dimensional spaces, for some particular measures of dimension. These and other data structures for nearest-neighbor searching are surveyed.", "Given a setV ofn points ink-dimensional space, and anLq-metric (Minkowski metric), the all-nearest-neighbors problem is defined as follows: for each pointp inV, find all those points inV? p that are closest top under the distance metricLq. We give anO(n logn) algorithm for the all-nearest-neighbors problem, for fixed dimensionk and fixed metricLq. Since there is an ?(n logn) lower bound, in the algebraic decision-tree model of computation, on the time complexity of any algorithm that solves the all-nearest-neighbors problem (fork=1), the running time of our algorithm is optimal up to a constant factor." ] }
0804.2095
2952659948
Logic Programming languages and combinational circuit synthesis tools share a common "combinatorial search over logic formulae" background. This paper attempts to reconnect the two fields with a fresh look at Prolog encodings for the combinatorial objects involved in circuit synthesis. While benefiting from Prolog's fast unification algorithm and built-in backtracking mechanism, efficiency of our search algorithm is ensured by using parallel bitstring operations together with logic variable equality propagation, as a mapping mechanism from primary inputs to the leaves of candidate Leaf-DAGs implementing a combinational circuit specification. After an exhaustive expressiveness comparison of various minimal libraries, a surprising first-runner, Strict Boolean Inequality "<" together with constant function "1" also turns out to have small transistor-count implementations, competitive to NAND-only or NOR-only libraries. As a practical outcome, a more realistic circuit synthesizer is implemented that combines rewriting-based simplification of (<,1) circuits with exhaustive Leaf-DAG circuit search. Keywords: logic programming and circuit design, combinatorial object generation, exact combinational circuit synthesis, universal boolean logic libraries, symbolic rewriting, minimal transistor-count circuit synthesis
Rewriting simplification has been used in various forms in recent work on multi-level synthesis @cite_4 @cite_12 using non-SOP encodings ranging from And-Inverter Gates (AIGs) and XOR-AND nets to graph-based representations in the tradition of @cite_0 . Interestingly, new synthesis targets, ranging from AIGs to cyclic combinational circuits @cite_13 , turned out to be competitive with more traditional minimization-based synthesis techniques. Synthesis of reversible circuits, with possible uses in low-power adiabatic computing and quantum computing @cite_3 , has also emerged. Despite the super-exponential complexity of the problem, exact circuit synthesis has been reported successful for increasingly large circuits @cite_10 @cite_2 . While @cite_8 describes the basics of CMOS technology, we refer the reader interested in full background information for our transistor models to @cite_9 .
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_10", "@cite_9", "@cite_3", "@cite_0", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "", "", "", "2157024459", "2146532505", "2080267935", "2752853835", "2125850227", "191551858" ], "abstract": [ "", "", "", "(NOTE: Each chapter begins with an Introduction and concludes with a Summary, To Probe Further, and Exercises and Design Problems.) I. THE FABRICS. 1. Introduction. A Historical Perspective. Issues in Digital Integrated Circuit Design. Quality Metrics of a Digital Design. 2. The Manufacturing Process. The CMOS Manufacturing Process. Design Rules-The Contract between Designer and Process Engineer. Packaging Integrated Circuits. Perspective-Trends in Process Technology. 3. The Devices. The Diode. The MOS(FET) Transistor. A Word on Process Variations. Perspective: Technology Scaling. 4. The Wire. A First Glance. Interconnect Parameters-Capitance, Resistance, and Inductance. Electrical Wire Models. SPICE Wire Models. Perspective: A Look into the Future. II. A CIRCUIT PERSPECTIVE. 5. The CMOS Inverter. The Static CMOS Inverter-An Intuitive Perspective. Evaluating the Robustness of the CMOS Inverter: The Static Behavior. Performance of CMOS Inverter: The Dynamic Behavior. Power, Energy, and Energy-Delay. Perspective: Technology Scaling and Its Impact on the Inverter Metrics. 6. Designing Combinational Logic Gates in CMOS. Static CMOS Design. Dynamic CMOS Design. How to Choose a Logic Style? Perspective: Gate Design in the Ultra Deep-Submicron Era. 7. Designing Sequential Logic Circuits. Timing Metrics for Sequential Circuits. Classification of Memory Elements. Static Latches and Registers. Dynamic Latches and Registers. Pulse Registers. Sense-Amplifier Based Registers. Pipelining: An Approach to Optimize Sequential Circuits. Non-Bistable Sequential Circuits. Perspective: Choosing a Clocking Strategy. III. A SYSTEM PERSPECTIVE. 8. Implementation Strategies for Digital ICS. From Custom to Semicustom and Structured-Array Design Approaches. Custom Circuit Design. Cell-Based Design Methodology. Array-Based Implementation Approaches. Perspective-The Implementation Platform of the Future. 9. Coping with Interconnect. Capacitive Parasitics. Resistive Parasitics. Inductive Parasitics. Advanced Interconnect Techniques. Perspective: Networks-on-a-Chip. 10. Timing Issues in Digital Circuits. Timing Classification of Digital Systems. Synchronous Design-An In-Depth Perspective. Self-Timed Circuit Design. Synchronizers and Arbiters. Clock Synthesis and Synchronization Using a Phased-Locked Loop. Future Directions and Perspectives. 11. Designing Arithmetic Building Blocks. Datapaths in Digital Processor Architectures. The Adder. The Multiplier. The Shifter. Other Arithmetic Operators. Power and Spped Trade-Offs in Datapath Structures. Perspective: Design as a Trade-off. 12. Designing Memory and Array Structures. The Memory Core. Memory Peripheral Circuitry. Memory Reliability and Yield. Power Dissipation in Memories. Case Studies in Memory Design. Perspective: Semiconductor Memory Trends and Evolutions. Problem Solutions. Index.", "Reversible or information-lossless circuits have applications in digital signal processing, communication, computer graphics, and cryptography. They are also a fundamental requirement in the emerging field of quantum computation. We investigate the synthesis of reversible circuits that employ a minimum number of gates and contain no redundant input-output line-pairs (temporary storage channels). 
We prove constructively that every even permutation can be implemented without temporary storage using NOT, CNOT, and TOFFOLI gates. We describe an algorithm for the synthesis of optimal circuits and study the reversible functions on three wires, reporting the distribution of circuit sizes. We also study canonical circuit decompositions where gates of the same kind are grouped together. Finally, in an application important to quantum computing, we synthesize oracle circuits for Grover's search algorithm, and show a significant improvement over a previously proposed synthesis algorithm.", "In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.", "A fuel pin hold-down and spacing apparatus for use in nuclear reactors is disclosed. Fuel pins forming a hexagonal array are spaced apart from each other and held-down at their lower end, securely attached at two places along their length to one of a plurality of vertically disposed parallel plates arranged in horizontally spaced rows. These plates are in turn spaced apart from each other and held together by a combination of spacing and fastening means. The arrangement of this invention provides a strong vibration free hold-down mechanism while avoiding a large pressure drop to the flow of coolant fluid. This apparatus is particularly useful in connection with liquid cooled reactors such as liquid met al cooled fast breeder reactors.", "A collection of logic gates forms a combinational circuit if the outputs can be described as Boolean functions of the current input values only. Optimizing combinational circuitry, for instance, by reducing the number of gates (the area) or by reducing the length of the signal paths (the delay), is an overriding concern in the design of digital integrated circuits. The accepted wisdom is that combinational circuits must have acyclic (i.e., loop-free or feed-forward) topologies. In fact, the idea that “combinational” and “acyclic” are synonymous terms is so thoroughly ingrained that many textbooks provide the latter as a definition of the former. And yet simple examples suggest that this is incorrect. In this dissertation, we advocate the design of cyclic combinational circuits (i.e., circuits with loops or feedback paths). We demonstrate that circuits can be optimized effectively for area and for delay by introducing cycles. On the theoretical front, we discuss lower bounds and we show that certain cyclic circuits are one-half the size of the best possible equivalent a cyclic implementations. On the practical front, we describe an efficient approach for analyzing cyclic circuits, and we provide a general framework for synthesizing such circuits. 
On trials with industry-accepted benchmark circuits, we obtained significant improvements in area and delay in nearly all cases. Based on these results, we suggest that it is time to re-write the definition: combinational might well mean cyclic.", "The problem of encoding arises in several areas of logic synthesis. Due to the nature of this problem, it is often difficult to systematically explore the space of all feasible encodings in order to find an optimal one. In this paper, we show that when the objects to be encoded are Boolean functions, it is possible to formulate and solve the problem optimally. We present a general approach to the encoding problem with one or more code-bit functions having some desirable properties. The method allows for an efficient implementation using branch-and-bound procedure coupled with specialized BDD operators. The proposed approach was used to synthesize look-up table (LUT) cascades implementing Boolean functions. Experimental results show that it finds optimal solutions for complex encoding problems in less than a second of CPU time." ] }
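The bitstring evaluation idea mentioned above can be sketched independently of Prolog: represent every n-input Boolean function as a (2^n)-bit integer truth table and close a gate library under composition until the target function appears. The search below enumerates by depth rather than by Leaf-DAG cost, so it is only a simplified stand-in for the exact synthesis procedure described above; the two libraries shown (NAND-only, and strict inequality "<" with constant 1) follow the discussion in the abstract.

```python
# Exhaustive search over truth tables encoded as bit-parallel integers. This finds a
# shallow formula (minimal number of closure rounds), not a gate-count-minimal Leaf-DAG.
def synthesize(target, n_vars, gate_name, gate, constants=()):
    size = 1 << n_vars
    mask = (1 << size) - 1
    found = {}
    for i in range(n_vars):
        # Truth table of input x_i: bit b of the table is bit i of the minterm index b.
        found[sum(((b >> i) & 1) << b for b in range(size))] = f"x{i}"
    for c in constants:
        found.setdefault(mask if c else 0, str(c))
    while target not in found:
        new = {}
        items = list(found.items())
        for ta, ea in items:
            for tb, eb in items:
                tt = gate(ta, tb, mask)
                if tt not in found and tt not in new:
                    new[tt] = f"{gate_name}({ea},{eb})"
        if not new:
            return None                     # target not expressible in this library
        found.update(new)
    return found[target]

if __name__ == "__main__":
    n = 2
    xor_tt = sum((((b >> 0) & 1) ^ ((b >> 1) & 1)) << b for b in range(1 << n))
    nand = lambda a, b, m: ~(a & b) & m     # NAND-only library
    lt = lambda a, b, m: ~a & b & m         # strict Boolean inequality a < b
    print(synthesize(xor_tt, n, "NAND", nand))
    print(synthesize(xor_tt, n, "LT", lt, constants=(1,)))
```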
0804.0273
1782352263
We consider the problem of intruder deduction in security protocol analysis: that is, deciding whether a given message M can be deduced from a set of messages Γ under the theory of blind signatures and arbitrary convergent equational theories modulo associativity and commutativity (AC) of certain binary operators. The traditional formulations of intruder deduction are usually given in natural-deduction-like systems and proving decidability requires significant effort in showing that the rules are "local" in some sense. By using the well-known translation between natural deduction and sequent calculus, we recast the intruder deduction problem as proof search in sequent calculus, in which locality is immediate. Using standard proof theoretic methods, such as permutability of rules and cut elimination, we show that the intruder deduction problem can be reduced, in polynomial time, to the elementary deduction problems, which amounts to solving certain equations in the underlying individual equational theories. We further show that this result extends to combinations of disjoint AC-convergent theories whereby the decidability of intruder deduction under the combined theory reduces to the decidability of elementary deduction in each constituent theory. Although various researchers have reported similar results for individual cases, our work shows that these results can be obtained using a systematic and uniform methodology based on the sequent calculus.
Several existing works in the literature deal with intruder deduction. Our work is more closely related to, e.g., @cite_13 @cite_7 @cite_3 than to, say, @cite_16 @cite_1 , in that we do not have explicit destructors (projection, decryption, unblinding). In the latter works, these destructors are considered part of the equational theory, so in this sense our work slightly extends theirs to allow combinations of explicit and implicit destructors. A drawback of the approach with explicit destructors is that one needs to consider these destructors together with other algebraic properties in proving decidability, although recent work on combining decidable theories @cite_5 allows one to deal with them modularly. Combination of intruder theories has been considered in @cite_17 @cite_5 @cite_8 as part of solutions to the more difficult problem of deducibility constraints, which assumes active intruders. In particular, Delaune et al. @cite_8 obtain results similar to ours concerning the combination of AC theories. One difference between these works and ours is in how this combination is derived: their approach is more algorithmic, whereas our result is obtained through analysis of proof systems.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_1", "@cite_3", "@cite_5", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2081719098", "2046270839", "1498742226", "1992375708", "1810931524", "2060349224", "2130123502", "1578151293" ], "abstract": [ "We present complexity results for the verification of security protocols. Since the perfect cryptography assumption is unrealistic for cryptographic primitives with visible algebraic properties, we extend the classical Dolev-Yao model by permitting the intruder to exploit these properties. More precisely, we are interested in theories such as Exclusive or and Abelian groups in combination with the homomorphism axiom. We show that the intruder deduction problem is in PTIME in both cases, improving the EXPTIME complexity results of Lafourcade, Lugiez and Treinen. iez, R. Treinen, Intruder deduction for AC-like equational theories with homomorphisms, in: Proc. 16th Internat. Conf. Rewriting Techniques and Applications (RTA'05), Nara, Japan, Lecture Notes in Comput. Sci., vol. 3467, Springer, Berlin, 2005, pp. 308-322].", "We are interested in the design of automated procedures for analyzing the (in)security of cryptographic protocols in the Dolev-Yao model for a bounded number of sessions when we take into account some algebraic properties satisfied by the operators involved in the protocol. This leads to a more realistic model in comparison to what we get under the perfect cryptography assumption, but it implies that protocol analysis deals with terms modulo some equational theory instead of terms in a free algebra. The main goal of this paper is to setup a general approach that works for a whole class of monoidal theories which contains many of the specific cases that have been considered so far in an ad-hoc way (e.g. exclusive or, Abelian groups, exclusive or in combination with the homomorphism axiom). We follow a classical schema for cryptographic protocol analysis which proves first a locality result and then reduces the insecurity problem to a symbolic constraint solving problem. This approach strongly relies on the correspondence between a monoidal theory E and a semiring S\"E which we use to deal with the symbolic constraints. We show that the well-defined symbolic constraints that are generated by reasonable protocols can be solved provided that unification in the monoidal theory satisfies some additional properties. The resolution process boils down to solving particular quadratic Diophantine equations that are reduced to linear Diophantine equations, thanks to linear algebra results and the well-definedness of the problem. Examples of theories that do not satisfy our additional properties appear to be undecidable, which suggests that our characterization is reasonably tight.", "In formal approaches, messages sent over a network are usually modeled by terms together with an equational theory, axiomatizing the properties of the cryptographic functions (encryption, exclusive or, ...). The analysis of cryptographic protocols requires a precise understanding of the attacker knowledge. Two standard notions are usually used: deducibility and indistinguishability. Only few results have been obtained (in an ad-hoc way) for equational theories with associative and commutative properties, especially in the case of static equivalence. The main contribution of this paper is to propose a general setting for solving deducibility and indistinguishability for an important class (called monoidal) of these theories. 
Our setting relies on the correspondence between a monoidal theory E and a semiring SE which allows us to give an algebraic characterization of the deducibility and indistinguishability problems. As a consequence we recover easily existing decidability results and obtain several new ones.", "Cryptographic protocols are small programs which involve a high level of concurrency and which are difficult to analyze by hand. The most successful methods to verify such protocols are based on rewriting techniques and automated deduction in order to implement or mimic the process calculus describing the execution of a protocol. We are interested in the intruder deduction problem, that is vulnerability to passive attacks in presence of equational theories which model the protocol specification and properties of the cryptographic operators. In the present paper, we consider the case where the encryption distributes over the operator of an Abelian group or over an exclusive-or operator. We prove decidability of the intruder deduction problem in both cases. We obtain a PTIME decision procedure in a restricted case, the so-called binary case. These decision procedures are based on a careful analysis of the proof system modeling the deductive power of the intruder, taking into account the algebraic properties of the equational theories under consideration. The analysis of the deduction rules interacting with the equational theory relies on the manipulation of Z-modules in the general case, and on results from prefix rewriting in the binary case.", "In formal approaches, messages sent over a network are usually modeled by terms together with an equational theory, axiomatizing the properties of the cryptographic functions (encryption, exclusive or, ...). The analysis of cryptographic protocols requires a precise understanding of the attacker knowledge. Two standard notions are usually considered: deducibility and indistinguishability. Those notions are well-studied and several decidability results already exist to deal with a variety of equational theories. However most of the results are dedicated to specific equational theories. We show that decidability results can be easily combined for any disjoint equational theories: if the deducibility and indistinguishability relations are decidable for two disjoint theories, they are also decidable for their union. As an application, new decidability results can be obtained using this combination theorem.", "The analysis of security protocols requires precise formulations of the knowledge of protocol participants and attackers. In formal approaches this knowledge is often treated in terms of message deducibility and indistinguishability relations. In this paper we study the decidability of these two relations. The messages in question may employ functions (encryption, decryption, etc.) axiomatized in an equational theory. One of our main positive results says that deducibility and indistinguishability are both decidable in polynomial time for a large class of equational theories. This class of equational theories is defined syntactically and includes, for example, theories for encryption, decryption, and digital signatures. We also establish general decidability theorems for an even larger class of theories. 
These theorems require only loose, abstract conditions, and apply to many other useful theories, for example with blind digital signatures, homomorphic encryption, XOR, and other associative-commutative functions.", "We present decidability results for the verification of cryptographic protocols in the presence of equational theories corresponding to xor and Abelian groups. Since the perfect cryptography assumption is unrealistic for cryptographic primitives with visible algebraic properties such as xor, we extend the conventional Dolev-Yao model by permitting the intruder to exploit these properties. We show that the ground reachability problem in NP for the extended intruder theories in the cases of xor and Abelian groups. This result follows from a normal proof theorem. Then, we show how to lift this result in the xor case: we consider a symbolic constraint system expressing the reachability (e.g., secrecy) problem for a finite number of sessions. We prove that such a constraint system is decidable, relying in particular on an extension of combination algorithms for unification procedures. As a corollary, this enables automatic symbolic verification of cryptographic protocols employing xor for a fixed number of sessions.", "Most of the decision procedures for symbolic analysis of protocols are limited to a fixed set of algebraic operators associated with a fixed intruder theory. Examples of such sets of operators comprise XOR, multiplication exponentiation, abstract encryption decryption. In this paper we give an algorithm for combining decision procedures for arbitrary intruder theories with disjoint sets of operators, provided that solvability of ordered intruder constraints, a slight generalization of intruder constraints, can be decided in each theory. This is the case for most of the intruder theories for which a decision procedure has been given. In particular our result allows us to decide trace-based security properties of protocols that employ any combination of the above mentioned operators with a bounded number of sessions." ] }
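As a concrete (and heavily simplified) illustration of the intruder deduction problem discussed above, the following sketch decides deducibility for a free Dolev-Yao fragment only (pairing and symmetric encryption, with no AC operators, blind signatures, or other equational axioms), by saturating the knowledge set with analysis rules and then synthesising the target. The term encoding and the restriction that decryption keys must occur literally in the analysed knowledge are simplifying assumptions of this sketch.

```python
# Passive intruder deduction for a free Dolev-Yao fragment: terms are atoms (strings),
# pairs ("pair", a, b), or symmetric encryptions ("enc", m, k). Analysis closes the
# knowledge under projection and decryption with a known key; synthesis rebuilds terms.
# Simplification: a decryption key must appear literally in the analysed set.

def analyse(knowledge):
    know = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(know):
            new = set()
            if isinstance(t, tuple) and t[0] == "pair":
                new = {t[1], t[2]} - know
            elif isinstance(t, tuple) and t[0] == "enc" and t[2] in know:
                new = {t[1]} - know
            if new:
                know |= new
                changed = True
    return know

def synthesisable(target, know):
    if target in know:
        return True
    if isinstance(target, tuple) and target[0] in ("pair", "enc"):
        return synthesisable(target[1], know) and synthesisable(target[2], know)
    return False

def deducible(target, knowledge):
    return synthesisable(target, analyse(knowledge))

if __name__ == "__main__":
    gamma = {("enc", "secret", "k1"), ("pair", "k1", "nonce")}
    print(deducible("secret", gamma))                 # True: k1 is exposed by projection
    print(deducible(("enc", "nonce", "k2"), gamma))   # False: k2 is not derivable
```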
0804.0273
1782352263
We consider the problem of intruder deduction in security protocol analysis: that is, deciding whether a given message M can be deduced from a set of messages Γ under the theory of blind signatures and arbitrary convergent equational theories modulo associativity and commutativity (AC) of certain binary operators. The traditional formulations of intruder deduction are usually given in natural-deduction-like systems and proving decidability requires significant effort in showing that the rules are "local" in some sense. By using the well-known translation between natural deduction and sequent calculus, we recast the intruder deduction problem as proof search in sequent calculus, in which locality is immediate. Using standard proof theoretic methods, such as permutability of rules and cut elimination, we show that the intruder deduction problem can be reduced, in polynomial time, to the elementary deduction problems, which amounts to solving certain equations in the underlying individual equational theories. We further show that this result extends to combinations of disjoint AC-convergent theories whereby the decidability of intruder deduction under the combined theory reduces to the decidability of elementary deduction in each constituent theory. Although various researchers have reported similar results for individual cases, our work shows that these results can be obtained using a systematic and uniform methodology based on the sequent calculus.
It remains to be seen whether sequent calculus, and its associated proof techniques, can prove useful for richer theories. For certain deduction problems, i.e., those in which the constructors interact with the equational theory, there do not seem to be general results like the ones we obtain for theories with no interaction with the constructors. One natural problem where this interaction occurs is the theory with homomorphic encryption, such as the one considered in @cite_3 . Another interesting challenge is to see how sequent calculus can be used to study the more difficult problem of solving intruder deduction constraints, such as those studied in @cite_13 @cite_10 @cite_18 .
{ "cite_N": [ "@cite_10", "@cite_18", "@cite_13", "@cite_3" ], "mid": [ "2108533001", "1544129038", "2130123502", "1992375708" ], "abstract": [ "We provide a method for deciding the insecurity of cryptographic protocols in presence of the standard Dolev-Yao intruder (with a finite number of sessions) extended with so-called oracle rules, i.e., deduction rules that satisfy certain conditions. As an instance of this general framework, we ascertain that protocol insecurity is in NP for an intruder that can exploit the properties of the XOR operator. This operator is frequently used in cryptographic protocols but cannot be handled in most protocol models. An immediate consequence of our proof is that checking whether a message can be derived by an intruder (using XOR) is in P. We also apply our framework to an intruder that exploits properties of certain encryption modes such as cipher block chaining (CBC).", "Security of a cryptographic protocol for a bounded number of sessions is usually expressed as a symbolic trace reachability problem. We show that symbolic trace reachability for well-defined protocols is decidable in presence of the exclusive or theory in combination with the homomorphism axiom. These theories allow us to model basic properties of important cryptographic operators This trace reachability problem can be expressed as a system of symbolic deducibility constraints for a certain inference system describing the capabilities of the attacker. One main step of our proof consists in reducing deducibility constraints to constraints for deducibility in one step of the inference system. This constraint system, in turn, can be expressed as a system of quadratic equations of a particular form over ℤ 2ℤ[h], the ring of polynomials in one indeterminate over the finite field ℤ 2ℤ. We show that satisfiability of such systems is decidable", "We present decidability results for the verification of cryptographic protocols in the presence of equational theories corresponding to xor and Abelian groups. Since the perfect cryptography assumption is unrealistic for cryptographic primitives with visible algebraic properties such as xor, we extend the conventional Dolev-Yao model by permitting the intruder to exploit these properties. We show that the ground reachability problem in NP for the extended intruder theories in the cases of xor and Abelian groups. This result follows from a normal proof theorem. Then, we show how to lift this result in the xor case: we consider a symbolic constraint system expressing the reachability (e.g., secrecy) problem for a finite number of sessions. We prove that such a constraint system is decidable, relying in particular on an extension of combination algorithms for unification procedures. As a corollary, this enables automatic symbolic verification of cryptographic protocols employing xor for a fixed number of sessions.", "Cryptographic protocols are small programs which involve a high level of concurrency and which are difficult to analyze by hand. The most successful methods to verify such protocols are based on rewriting techniques and automated deduction in order to implement or mimic the process calculus describing the execution of a protocol. We are interested in the intruder deduction problem, that is vulnerability to passive attacks in presence of equational theories which model the protocol specification and properties of the cryptographic operators. 
In the present paper, we consider the case where the encryption distributes over the operator of an Abelian group or over an exclusive-or operator. We prove decidability of the intruder deduction problem in both cases. We obtain a PTIME decision procedure in a restricted case, the so-called binary case. These decision procedures are based on a careful analysis of the proof system modeling the deductive power of the intruder, taking into account the algebraic properties of the equational theories under consideration. The analysis of the deduction rules interacting with the equational theory relies on the manipulation of Z-modules in the general case, and on results from prefix rewriting in the binary case." ] }
0804.1173
2951145471
Given a set of unit-disks in the plane with union area @math , what fraction of @math can be covered by selecting a pairwise disjoint subset of the disks? Rado conjectured 1/4 and proved @math . Motivated by the problem of channel-assignment for wireless access points, in which use of 3 channels is a standard practice, we consider a variant where the selected subset of disks must be 3-colourable with disks of the same colour pairwise-disjoint. For this variant of the problem, we conjecture that it is always possible to cover at least @math of the union area and prove @math . We also provide an @math algorithm to select a subset achieving a @math bound.
A more sophisticated formalization of the deployment problem allows disks assigned to the same channel to overlap but only counts the area where there is no interference, i.e., the area of the set of points covered by exactly one disk on some channel. In terms of colouring, the problem is to colour a subset of the given disks so as to maximize the area of the set of points @math such that, for some colour, @math is covered by exactly one disk of that colour. We call this the 1-covered area. The authors of @cite_1 proved that it is always possible to achieve approximately @math 1-covered area using only one colour. This model has also been considered with respect to two other optimization problems. For the problem of maximizing the 1-covered area using one colour, previous work has focused on approximation algorithms (though no proof yet exists, it is suspected that this problem is NP-hard). The authors of @cite_1 present a 5.83-approximation algorithm with polynomial runtime, and the authors of @cite_14 show that the problem admits a polynomial-time approximation scheme when the ratio of the largest disk radius to the smallest disk radius is a constant.
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2060662766", "2099599968" ], "abstract": [ "In this paper, we study the following disc covering problem: Given a set of discs of various radii on the plane and centers on the grid points, find a subset of discs to maximize the area covered by exactly one disc. Thisproblem originates from the application in digital halftoning, with the best known approximation factor being 5.83 (, 2004). We show that if their radii are between two positive constants, then there exists a polynomial time approximation scheme. Our techniques are based on the width-bounded geometric separator recently developed in Fu and Wang (2004), Fu (2006).", "This paper considers the following geometric optimization problem: Input is a matrix R=(r ij ). Each entry r ij represents a radius of a disc with its center at (i,j) in the plane. We want to choose discs in such a way that the total area covered by exactly one disc is maximized. This problem is closely related to digital halftoning, a technique to convert a continuous-tone image into a binary image for printing. An exact algorithm is given for the one-dimensional version of the problem while approximation algorithms are given for the two-dimensional one. The approximation algorithms are verified to be satisfactory in practice through experiments in applications to digital halftoning." ] }
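The 1-covered-area objective described above can be estimated by Monte Carlo sampling: a sample point counts if, for some colour (channel), it lies in exactly one disk of that colour. The disk layout, radii, box size, and colourings below are made-up illustrative values, not instances from the cited work.

```python
# Monte Carlo sketch of the 1-covered-area objective for coloured (channel-assigned) disks.
import random

def one_covered_area(disks, colours, side, samples=200_000, seed=0):
    """disks: list of (x, y, r); colours[i]: channel of disk i; area estimated in a side x side box."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        px, py = rng.uniform(0, side), rng.uniform(0, side)
        per_colour = {}
        for (x, y, r), c in zip(disks, colours):
            if (px - x) ** 2 + (py - y) ** 2 <= r * r:
                per_colour[c] = per_colour.get(c, 0) + 1
        if any(n == 1 for n in per_colour.values()):
            hits += 1
    return hits / samples * side * side

if __name__ == "__main__":
    disks = [(3, 3, 1), (3.8, 3, 1), (7, 7, 1), (7.5, 7.5, 1)]
    print(one_covered_area(disks, colours=[0, 0, 0, 0], side=10))  # one channel: overlaps lose area
    print(one_covered_area(disks, colours=[0, 1, 0, 1], side=10))  # two channels: overlaps resolved
```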
0804.1173
2951145471
Given a set of unit-disks in the plane with union area @math , what fraction of @math can be covered by selecting a pairwise disjoint subset of the disks? Rado conjectured 1 4 and proved @math . Motivated by the problem of channel-assignment for wireless access points, in which use of 3 channels is a standard practice, we consider a variant where the selected subset of disks must be 3-colourable with disks of the same colour pairwise-disjoint. For this variant of the problem, we conjecture that it is always possible to cover at least @math of the union area and prove @math . We also provide an @math algorithm to select a subset achieving a @math bound.
Another well-explored optimization problem is conflict-free colouring: here the goal is to minimize the number of colours needed to 1-cover the whole area, i.e. the union of the given disks. The authors of @cite_0 prove that @math colours are always sufficient and sometimes necessary for any given disks of general radii. The authors of @cite_4 have shown that, if each disk intersects at most @math others, then @math colours are sufficient for a conflict-free colouring, improving the bound from @cite_0 when @math is much smaller than @math . There is also work on online algorithms for conflict-free colouring @cite_10 , and on conflict-free colouring of regions other than disks @cite_8 .
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_4", "@cite_8" ], "mid": [ "2112505974", "2046633098", "", "2007506094" ], "abstract": [ "Motivated by a frequency assignment problem in cellular networks, we introduce and study a new coloring problem that we call minimum conflict-free coloring (min-CF-coloring). In its general form, the input of the min-CF-coloring problem is a set system @math , where each @math is a subset of X. The output is a coloring @math of the sets in @math that satisfies the following constraint: for every @math there exists a color @math and a unique set @math such that @math and @math . The goal is to minimize the number of colors used by the coloring @math . Min-CF-coloring of general set systems is not easier than the classic graph coloring problem. However, in view of our motivation, we consider set systems induced by simple geometric regions in the plane. In particular, we study disks (both congruent and noncongruent), axis-parallel rectangles (with a constant ratio between the smallest and largest rectangle), regular hexagons (with a constant ratio between the smallest and largest hexagon), and general congruent centrally symmetric convex regions in the plane. In all cases we have coloring algorithms that use O(log n) colors (where n is the number of regions). Tightness is demonstrated by showing that even in the case of unit disks, @math colors may be necessary. For rectangles and hexagons we also obtain a constant-ratio approximation algorithm when the ratio between the largest and smallest rectangle (hexagon) is a constant. We also consider a dual problem of CF-coloring points with respect to sets. Given a set system @math , the goal in the dual problem is to color the elements in X with a minimum number of colors so that every set @math contains a point whose color appears only once in S. We show that O(log |X|) colors suffice for set systems in which X is a set of points in the plane and the sets are intersections of X with scaled translations of a convex region. This result is used in proving that O(log n) colors suffice in the primal version.", "We consider an online version of the conflict-free coloring of a set of points on the line, where each newly inserted point must be assigned a color upon insertion, and at all times the coloring has to be conflict-free, in the sense that in every interval I there is a color that appears exactly once in I. We present several deterministic and randomized algorithms for achieving this goal, and analyze their performance, that is, the maximum number of colors that they need to use, as a function of the number n of inserted points. We first show that a natural and simple (deterministic) approach may perform rather poorly, requiring Ω(√n) colors in the worst case. We then modify this approach, to obtain an efficient deterministic algorithm that uses a maximum of Θ(log2 n) colors. Next, we present two randomized solutions. The first algorithm requires an expected number of at most O(log2 n) colors, and produces a coloring which is valid with high probability, and the second one, which is a variant of our efficient deterministic algorithm, requires an expected number of at most O(log n log log n) colors but always produces a valid coloring. We also analyze the performance of the simplest proposed algorithm when the points are inserted in a random order, and present an incomplete analysis that indicates that, with high probability, it uses only O(log n) colors. 
Finally, we show that in the extension of this problem to two dimensions, where the relevant ranges are disks, n colors may be required in the worst case. The average-case behavior for disks, and cases involving other planar ranges, are still open.", "", "In this paper, we study coloring problems related to frequency assignment problems in cellular networks. In abstract setting, the problems are of the following two types:CF-coloring of regions: Given a finite family S of n regions of some fixed type (such as discs, pseudo-discs, axis-parallel rectangles, etc.), what is the minimum integer k, such that one can assign a color to each region of S, using a total of at most k colors, such that the resulting coloring has the following property: For each point p ∈b∈S b there is at least one region b∈S that contains p in its interior, whose color is unique among all regions in S that contain p in their interior (in this case we say that p is being served' by that color). We refer to such a coloring as a conflict-free coloring of S (CF-coloring in short).CF-coloring of a range space: Given a set P of n points in Rd and a set R of ranges (for example, the set of all discs in the plane), what is the minimum integer k, such that one can color the points of P by k colors, so that for any r ∈ R with P∩r∈≠O, there is at least one point q ∈ P ∩ r that is assigned a unique color among all colors assigned to points of P ∩ r (in this case we say that r is 'served' by that color). We refer to such a coloring as a conflict-free coloring of (P,R) (CF-coloring in short)." ] }
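The conflict-free requirement discussed above admits an analogous sampling-based check: every covered point must see at least one colour that occurs on exactly one of its covering disks. This is only an approximate randomised verifier over a made-up instance, not an algorithm from the cited work.

```python
# Sampling-based sketch of the conflict-free property for a colouring of disks.
import random

def is_conflict_free(disks, colours, side, samples=100_000, seed=1):
    rng = random.Random(seed)
    for _ in range(samples):
        px, py = rng.uniform(0, side), rng.uniform(0, side)
        counts = {}
        for (x, y, r), c in zip(disks, colours):
            if (px - x) ** 2 + (py - y) ** 2 <= r * r:
                counts[c] = counts.get(c, 0) + 1
        if counts and not any(n == 1 for n in counts.values()):
            return False          # a covered sample point sees no uniquely-occurring colour
    return True

if __name__ == "__main__":
    disks = [(4, 4, 2), (5, 4, 2), (6, 4, 2)]            # three pairwise-overlapping disks in a row
    print(is_conflict_free(disks, [0, 0, 0], side=10))   # False: doubly-covered strips have no unique colour
    print(is_conflict_free(disks, [0, 1, 0], side=10))   # True: the middle disk's colour is unique where needed
```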