id | title | abstract | keyphrases | prmu
---|---|---|---|---|
-yLZ7v: | Understanding the relationships between interest in online math games and academic performance | Although the Internet is widely used by students in both formal and informal environments, little is known about how and where youth spend their time online. Using Internet search and Web analytics data, this study discovered a large-scale phenomenon associated with the poor performance of elementary school students in the USA that has been overlooked by educational researchers. This study found that approximately 10 million Internet users in the USA, many of whom are presumably youth, spend about 89 million hours in a year on a popular math game site that targets children. The number of game site users is equivalent to half of the K-5 Internet population in the USA. However, there is little evidence that the math games on the website meet the criteria for effective instruction as described in the literature. This study found a significant negative correlation between search volumes for the game site in the 50 states in the USA and 4th grade students' performance in mathematics and reading. Moreover, Internet users in the states with greater numbers of low-income families and fewer college graduates were more likely to search for the game site. The implications of these findings are discussed. | [
"math games",
"web analytics",
"digital divide",
"internet use",
"search trends"
] | [
"P",
"P",
"U",
"R",
"M"
] |
1cdmmGT | What happens to computer science research after it is published? Tracking CS research lines | Are computer science papers extended after they are published? We have surveyed 200 computer science publications (100 journal articles and 100 conference papers), using self-citations to identify potential and actual continuations. We are interested in determining the proportion of papers that do indeed continue, how and when the continuation takes place, and whether any distinctions are found between the journal and conference populations. Despite the implicit assumption of a research line behind each paper, manifest in the ubiquitous future research notes that close many of them, we find that more than 70% of the papers are never continued. | [
"computer science",
"bibliometrics",
"quantitative research"
] | [
"P",
"U",
"M"
] |
1mvgMV9 | Parallel algebraic multigrid based on subdomain blocking | The algebraic multigrid (AMG) approach provides a purely algebraic means to tackle the efficient solution of systems of equations posed on large unstructured grids, in 2D and 3D. While sequential AMG has been used for increasingly large problems (with several million unknowns), its use for even larger applications requires a parallel version. Since, in contrast to geometric multigrid, the hierarchy of coarser levels and the related operators develop dynamically during the setup phase of AMG, a direct parallelization is very complicated. Moreover, a naive parallelization would, in general, require unpredictable and highly complex communication patterns which seriously limit the achievable scalability, in particular of the costly setup phase. In this paper, we consider a classical AMG variant which has turned out to be highly robust and efficient in solving large systems of equations corresponding to elliptic PDEs, discretized by finite differences or finite volumes. Based on a straightforward partitioning of variables (using one of the available algebraic partitioning tools such as Metis), a parallelization approach is proposed which minimizes the communication without sacrificing convergence in complex situations. Results will be presented for industrial CFD and oil-reservoir simulation applications on distributed memory machines, including PC-clusters. | [
"parallel",
"algebraic multigrid",
"amg",
"unstructured grids",
"unstructured matrices",
"hierarchical solvers"
] | [
"P",
"P",
"P",
"P",
"M",
"U"
] |
2:iYb-& | Stochastic Differential Games in Insider Markets via Malliavin Calculus | In this paper, we use techniques of Malliavin calculus and forward integration to present a general stochastic maximum principle for anticipating stochastic differential equations driven by a Lévy type of noise. We apply our result to study a general stochastic differential game problem of an insider. | [
"stochastic differential game",
"malliavin calculus",
"maximum principle",
"jump diffusion",
"stochastic control",
"insider information"
] | [
"P",
"P",
"P",
"U",
"M",
"M"
] |
3Ue8ffB | Degradation of n-channel low temperature poly-Si TFTs dynamically stressed in OFF region with positive drain bias | The degradation characteristics of n-channel low temperature poly-Si thin film transistors (LTPS TFTs), which are alternately stressed in OFF region with drain positively biased and source grounded, are investigated. In this research, rectangular pulse signals, dynamically changing from -18 to 0 V with varied parameters such as rising time, falling time, and frequency, are applied to the gate terminal, and the drain is simultaneously biased at +5 V to stress the LTPS TFTs and examine the deterioration. It is observed that the degradation strongly depends on the frequency and rising time rather than the falling time of AC signals. As the gate voltage transitionally changes in the rising period, the accumulated holes should be swept out and flow into the source terminal, resulting from the positively biased drain and the floating body structure of TFTs. A degradation model of the parasitic BJT, based on the flowing direction of a sampling current Id, is proposed to explain the degradation mechanism of LTPS TFTs, and demonstrated by two electrical measurements, C-V curves and saturated forward and reverse Id-Vg transfer curves. | [
"low temperature poly-si",
"sampling current",
"thin-film transistors",
"ac stress",
"reliability"
] | [
"P",
"P",
"M",
"R",
"U"
] |
v6rMFU7 | A simple duality proof in convex quadratic programming with a quadratic constraint, and some applications | In this paper a simple derivation of duality is presented for convex quadratic programs with a convex quadratic constraint. This problem arises in a number of applications including trust region subproblems of nonlinear programming, regularized solution of ill-posed least squares problems, and ridge regression problems in statistical analysis. In general, the dual problem is a concave maximization problem with a linear equality constraint. We apply the duality result to: (1) the trust region subproblem, (2) the smoothing of empirical functions, and (3) to piecewise quadratic trust region subproblems arising in nonlinear robust Huber M-estimation problems in statistics. The results are obtained from a straightforward application of Lagrange duality. | [
"convex quadratic programming with a convex quadratic constraint",
"trust region subproblems",
"ill-posed least squares problems",
"lagrange duality"
] | [
"P",
"P",
"P",
"P"
] |
-Gh&gTk | Finite satisfiability for guarded fixpoint logic | The finite satisfiability problem for guarded fixpoint logic is decidable and complete for 2ExpTIME (resp. ExpTIME for formulas of bounded width). | [
"finite satisfiability",
"guarded fixpoint logic",
"formal methods",
"guarded fragment"
] | [
"P",
"P",
"U",
"M"
] |
5-nWk2E | Mapping streaming applications on multiprocessors with time-division-multiplexed network-on-chip | This paper addresses mapping of streaming applications (such as MPEG) on multiprocessor platforms with time-division-multiplexed network-on-chip. In particular, we solve processor selection, path selection and router configuration problems. Given the complexity of these problems, state of the art approaches in this area largely rely on greedy heuristics, which do not guarantee optimality. Our approach is based on a constraint programming formulation that merges a number of steps, usually tackled in sequence in classic approaches. Thus, our method has the potential of finding optimal solutions with respect to resource usage under throughput constraints. The experimental evaluation presented here shows that our approach is capable of exploring a range of solutions while giving the designer the opportunity to emphasize the importance of various design metrics. | [
"mapping",
"time-division-multiplexed network-on-chip",
"constraint programming",
"dataflow"
] | [
"P",
"P",
"P",
"U"
] |
3WtEmmi | RBF network methods for face detection and attentional frames | In this paper we introduce a set of adaptive vision techniques which could be used, for example, in video-conferencing applications. First, we present methods for finding faces and selecting attentional frames to focus visual processing. Second, we present methods for recognising individual gesture phases for camera control. Finally, we discuss how these techniques can be extended to 'virtual groups' of multiple people interacting at multiple sites. | [
"rbf networks",
"face detection",
"gesture recognition",
"head pose invariance",
"real-time vision applications",
"time-delay networks",
"visually mediated interaction"
] | [
"P",
"P",
"M",
"U",
"M",
"M",
"M"
] |
-oq4Vso | Computing the Principal Local Binary Patterns for face recognition using data mining tools | Local Binary Patterns are considered one of the best-performing texture descriptors; they employ a statistical feature extraction by means of the binarization of the neighborhood of every image pixel with a local threshold determined by the central pixel. The idea of using Local Binary Patterns for face description is motivated by the fact that faces can be seen as a composition of micro-patterns which are properly described by this operator and, consequently, it has become a very popular technique in recent years. In this work, we show a method to calculate the most important or Principal Local Binary Patterns for recognizing faces. To do this, the attribute evaluator algorithm of the data mining tool Weka is used. Furthermore, since we assume that each face region has a different influence on the recognition process, we have designed a 9-region mask and obtained a set of optimized weights for this mask by means of the data mining tool RapidMiner. Our proposal was tested with the FERET database and obtained a recognition rate varying between 90% and 94% when using only 9 uniform Principal Local Binary Patterns, for a database of 843 individuals; thus, we have reduced both the dimension of the feature vectors needed for completing the recognition tasks and the processing time required to compare all the faces in the database. | [
"local binary patterns",
"face recognition",
"data mining"
] | [
"P",
"P",
"P"
] |
1ogj-PE | identifying emergent leadership in small groups using nonverbal communicative cues | This paper first presents an analysis of how an emergent leader is perceived in newly formed small groups, and second, explores correlations between the perception of leadership and automatically extracted nonverbal communicative cues. We hypothesize that the difference in individual nonverbal features between emergent leaders and non-emergent leaders is significant and measurable using speech activity. Our results on a new interaction corpus show that such an approach is promising, identifying the emergent leader with an accuracy of up to 80%. | [
"emergent leadership",
"speech activity",
"nonverbal behavior"
] | [
"P",
"P",
"M"
] |
3X&1yWU | Analysis of nearest neighbor load balancing algorithms for random loads | Nearest neighbor load balancing algorithms, like diffusion, are popular due to their simplicity, flexibility, and robustness. We show that they are also asymptotically very efficient when a random rather than a worst case initial load distribution is considered. We show that diffusion needs Θ((log n)^2/d) balancing time on a d-dimensional mesh network with n^d processors. Furthermore, some but not all of the algorithms known to perform better than diffusion in the worst case also perform better for random loads. We also present new results on worst case performance regarding the maximum load deviation. | [
"nearest neighbor load balancing algorithm",
"maximum load deviation",
"parallel algorithm analysis",
"load balancing random loads",
"diffusion load balancing"
] | [
"P",
"P",
"M",
"R",
"R"
] |
-Jfp7Py | wait-free queues with multiple enqueuers and dequeuers | The queue data structure is fundamental and ubiquitous. Lock-free versions of the queue are well known. However, an important open question is whether practical wait-free queues exist. Until now, only versions with limited concurrency were proposed. In this paper we provide a design for a practical wait-free queue. Our construction is based on the highly efficient lock-free queue of Michael and Scott. To achieve wait-freedom, we employ a priority-based helping scheme in which faster threads help the slower peers to complete their pending operations. We have implemented our scheme on multicore machines and present performance measurements comparing our implementation with that of Michael and Scott in several system configurations. | [
"wait-free algorithms",
"concurrent queues"
] | [
"M",
"R"
] |
55DwoC- | Impurity binding energies in semiconductor Fibonacci superlattices | We have calculated the density of states (DOS) of a GaAs-GaAlAs superlattice whose ratio between the potential energy of the impurities and the potential energy of the atoms at the lattice is an nth power of the golden mean (n = 1, 2, 3, ...). Our theory uses Dyson's equation together with a transfer-matrix treatment, within the tight-binding Hamiltonian model. The electronic DOS is calculated stressing the regions of frequency where the transfer function is complex, which correspond to non-localized states. | [
"impurities",
"fibonacci superlattices",
"electron density of status"
] | [
"P",
"P",
"M"
] |
2kUQnto | a systematic approach to advanced debugging through incremental compilation (preliminary draft) | This paper presents two topics: implementation of a debugger through use of an incremental compiler, and techniques for fine-grained incremental compilation. Both the debugger and the compiler are components of the highly integrated programming environment DICE (Distributed Incremental Compiling Environment) which aims at providing programmer support in the case where the programming environment resides in a host computer and the program is running on a target computer that is connected to the host. Commands at the debugger command level include all legitimate PASCAL statements. The debugger is machine-independent - it calls the incremental compiler which generates code for evaluation of commands, or modifies the machine code of the target program for insertion of breakpoints, etc. Essentially all machine-dependences are isolated inside the code generator of the incremental compiler. | [
"debugging",
"incremental",
"compilation",
"paper",
"implementation",
"use",
"component",
"integrability",
"programming environment",
"environments",
"distributed",
"programmer",
"support",
"case",
"computation",
"code",
"evaluation",
"code generation",
"dependencies"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U"
] |
-hCbnf7 | A linear-time self-stabilizing algorithm for the minimal 2-dominating set problem in general networks | Kamei and Kakugawa have recently proposed a self-stabilizing algorithm for the minimal k-dominating set problem. Their algorithm is a general form of the maximal-independent-set algorithm proposed by Shukla et al. The results in their paper are for any tree network that assumes Dijkstra's central demon model. In particular, the worst-case stabilization time is claimed to be O(n^2), where n is the number of nodes in the system. In this paper, we generalize their results for the case k = 2. We show that their algorithm with k = 2, when operating in any general network, is self-stabilizing under the central demon model, and solves the minimal 2-dominating set problem. We also derive that the worst-case stabilization time is linear, i.e., O(n). A bounded function technique is employed in obtaining these results. | [
"self-stabilizing algorithm",
"general network",
"minimal k-dominating set",
"tree network",
"central demon model",
"bounded function",
"maximal independent set",
"cut point"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"M",
"U"
] |
sCjUiVz | An ant colony optimization routing based on robustness for ad hoc networks with GPSs | The ant colony optimization (ACO) routing algorithm is one of the adaptive and efficient routing algorithms for mobile ad hoc networks (MANETs). In ACO routing algorithms, ant-like agents traverse the network to search for a path from a source to a destination, and lay down pheromone on the path. A data packet is transferred along a path selected with probability based on the amount of pheromone. The amount of pheromone laid down on a path depends on its quality, such as its number of hops and communication delay. However, in MANETs, continuous movement of nodes causes dynamic network change with time. Thus, even if a path with a small number of hops and short communication delay has much pheromone, it may become unavailable quickly due to link disconnections. Therefore, we focus on robustness of paths to construct paths that are not likely to be disconnected during a long period. In this paper, we propose a new ACO routing algorithm based on robustness of paths for MANETs with global positioning system (GPS): each ant-like agent evaluates robustness of a path using GPS information of visited nodes and decides the amount of pheromone to lay down based on the robustness. Moreover, in our algorithm, each node predicts link disconnections from neighbors' GPS information in order to adapt to dynamic network change. To keep paths available, when a node predicts a link disconnection, it redistributes the pheromone on the link to be disconnected so that construction of alternative paths can be accelerated. Simulation results show that our algorithm achieves higher packet delivery ratio with lower communication cost than AntHocNet and LAR. | [
"ant colony optimization",
"robustness",
"gps",
"routing algorithm",
"mobile ad hoc networks"
] | [
"P",
"P",
"P",
"P",
"P"
] |
-64&nNX | morphing simple polygons | In this paper we investigate the problem of morphing (i.e. continuously deforming) one simple polygon into another. We assume that our two initial polygons have the same number of sides n, and that corresponding sides are parallel. We show that a morph is always possible by a varying simple interpolating polygon also of n sides parallel to those of the two original ones. If we consider a uniform scaling or translation of part of the polygon as an atomic morphing step, then we show that O(n^(4/3+ε)) such steps are sufficient for the morph. | [
"morph",
"polygon",
"paper",
"parallel",
"interpolation",
"scale",
"translation",
"atom"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
31MEn4h | what hypertext is | Over the past couple decades, as the term "hypertext" has gained a certain popular currency, a question has been raised repeatedly: "What is hypertext?" Our most respected scholars offer a range of different, at times incompatible, answers. This paper argues that our best response to this situation is to adopt the approach taken with other terms that are central to intellectual communities (such as "natural selection," "communism," and "psychoanalysis"), a historical approach. In the case of "hypertext" the term began with Theodor Holm ("Ted") Nelson, and in this paper two of his early publications of "hypertext" are used to determine its initial meaning: the 1965 "A File Structure for the Complex, the Changing, and the Indeterminate" and the 1970 "No More Teachers' Dirty Looks." It is concluded that hypertext began as a term for forms of hypermedia (human-authored media that "branch or perform on request") that operate textually. This runs counter to definitions of hypertext in the literary community that focus solely on the link. It also runs counter to definitions in the research community that privilege tools for knowledge work over media. An inclusive future is envisioned. | [
"hypertext",
"paper",
"response",
"situated",
"communities",
"select",
"case",
"public",
"mean",
"structure",
"complexity",
"hypermedia",
"media",
"definition",
"linking",
"research",
"tools",
"knowledge work",
"future",
"hyperfilm",
"human",
"stretchtext",
"hypergram"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U",
"U",
"U",
"U"
] |
1rnw8-t | Improper Coloring of Unit Disk Graphs | Motivated by a satellite communications problem, we consider a generalized coloring problem on unit disk graphs. A coloring is k-improper if no more than k neighbors of every vertex have the same color as that assigned to the vertex. The k-improper chromatic number χ_k(G) is the least number of colors needed in a k-improper coloring of a graph G. The main subject of this work is analyzing the complexity of computing χ_k for the class of unit disk graphs and some related classes, e.g., hexagonal graphs and interval graphs. We show NP-completeness in many restricted cases and also provide both positive and negative approximability results. Because of the challenging nature of this topic, many seemingly simple questions remain: for example, it remains open to determine the complexity of computing χ_k for unit interval graphs. | [
"improper coloring",
"unit disk graph",
"hexagonal graph",
"interval graph",
"defective coloring",
"triangular lattice",
"weighted coloring"
] | [
"P",
"P",
"P",
"P",
"M",
"U",
"M"
] |
4FvSqi8 | Improving the kernel regularized least squares method for small-sample regression | The kernel regularized least squares (KRLS) method uses the kernel trick to perform non-linear regression estimation. Its performance depends on proper selection of both a kernel function and a regularization parameter. In practice, cross-validation along with the Gaussian RBF kernel have been widely used for carrying out model selection for KRLS. However, when training data is scarce, this combination often leads to poor regression estimation. In order to mitigate this issue, we follow two lines of investigation in this paper. First, we explore a new type of kernel function that is less susceptible to overfitting than the RBF kernel. Then, we consider alternative parameter selection methods that have been shown to perform well for other regression methods. Experiments conducted on real-world datasets show that an additive spline kernel greatly outperforms both the RBF and a previously proposed multiplicative spline kernel. We also find that the parameter selection procedure Finite Prediction Error (FPE) is a competitive alternative to cross-validation when using the additive splines kernel. | [
"kernel regularizedleast squares",
"non-linear regression",
"cross-validation",
"rbf kernel",
"parameter selection",
"spline kernel"
] | [
"P",
"P",
"P",
"P",
"P",
"P"
] |
1arpfwK | secure network provenance | This paper introduces secure network provenance (SNP) , a novel technique that enables networked systems to explain to their operators why they are in a certain state -- e.g., why a suspicious routing table entry is present on a certain router, or where a given cache entry originated. SNP provides network forensics capabilities by permitting operators to track down faulty or misbehaving nodes, and to assess the damage such nodes may have caused to the rest of the system. SNP is designed for adversarial settings and is robust to manipulation; its tamper-evident properties ensure that operators can detect when compromised nodes lie or falsely implicate correct nodes. We also present the design of SNooPy, a general-purpose SNP system. To demonstrate that SNooPy is practical, we apply it to three example applications: the Quagga BGP daemon, a declarative implementation of Chord, and Hadoop MapReduce. Our results indicate that SNooPy can efficiently explain state in an adversarial setting, that it can be applied with minimal effort, and that its costs are low enough to be practical. | [
"security",
"provenance",
"distributed systems",
"evidence",
"accountability",
"byzantine faults"
] | [
"P",
"P",
"M",
"U",
"U",
"U"
] |
2TmKanA | Invariant representative cocycles of cohomology generators using irregular graph pyramids | Structural pattern recognition describes and classifies data based on the relationships of features and parts. Topological invariants, like the Euler number, characterize the structure of objects of any dimension. Cohomology can provide more refined algebraic invariants to a topological space than does homology. It assigns quantities to the chains used in homology to characterize holes of any dimension. Graph pyramids can be used to describe subdivisions of the same object at multiple levels of detail. This paper presents cohomology in the context of structural pattern recognition and introduces an algorithm to efficiently compute representative cocycles (the basic elements of cohomology) in 2D using a graph pyramid. An extension to obtain scanning and rotation invariant cocycles is given. | [
"representative cocycles of cohomology generators",
"graph pyramids"
] | [
"P",
"P"
] |
--ncowB | Simulation of IPA gradients in hybrid network systems | Infinitesimal perturbation analysis (IPA) provides formulas for random gradients (derivatives) of performance measures with respect to parameters of interest, computed from sample paths of stochastic systems. In practice, IPA derivatives may be computed either from simulation runs or from empirical field data (when the formulas are nonparametric). Nonparametric IPA derivatives in fluid-flow queues have been recently derived for the loss volume and time average of buffer occupancy, with respect to buffer size, and arrival-rate or service-rate parameters. Additionally, these IPA derivatives have been shown to be unbiased in the sense that their expectation and differentiation operators commute, while their traditional discrete counterparts have long been known to be generally biased. Recent work has further shown how to map the computation of IPA derivatives from a fluid-flow queue to a compatible discrete counterpart without an appreciable loss of accuracy in performance measures. Thus, this work holds the promise of potential applications of IPA derivatives to gradient-based optimization of objective functions involving performance metrics parameterized by settable parameters in a queueing network context. This paper is an empirical study of IPA derivatives of individual queues within queueing systems which model telecommunications networks and some of their protocols. As a testbed, we used HNS (Hybrid Network Simulator), a hybrid Java simulator of queueing networks with traffic streams subject to several telecommunications protocols. More specifically, the hybrid feature of HNS admits models with mixtures of discrete (packet) flows and continuous (fluid) flows, and collects detailed statistics and IPA derivatives for all flow types. The paper outlines the mapping of IPA derivatives from the fluid domain to the packet domain as implemented in HNS, and studies the accuracy of IPA derivatives in compatible fluid and packet queueing models, as well as the stabilization of their values in time. Our experimental results lend empirical support to the contention that IPA derivatives can be accurately computed from discrete versions by adopting a fluid-flow view. Furthermore, the long-run values of various IPA derivatives are empirically shown to stabilize quite fast. Finally, the results provide the basis and motivation for IPA applications to the optimization of telecommunications network design and to potential new open-loop protocols that take advantage of IPA information. | [
"performance",
"design",
"algorithms"
] | [
"P",
"P",
"U"
] |
58BGet7 | Comparing software fault predictions of pure and zero-inflated Poisson regression models | Predicting the software quality prior to system tests and operations has proven to be useful for achieving effective reliability improvements. Poisson (pure) regression modelling is the most commonly used count modelling technique for predicting the expected number of faults in software modules. It is best suited to cases where the distribution of the fault data (the dependent variable) is not biased, that is, equidispersed fault data whose mean equals the variance. However, in software fault data we often observe a large portion of zeros (no faults), especially in high-assurance systems. In such cases a pure Poisson regression model (PRM) may yield inaccurate fault predictions. A zero-inflated Poisson (ZIP) model changes the mean structure of a PRM, resulting in improved predictive quality. To illustrate this, we examined software data collected from a full-scale industrial software system. Fault prediction models were calibrated using both pure Poisson and ZIP regression techniques. To prevent claims based on a biased data split (for the fit and test data sets), the data set was randomly split 50 times, and models were calibrated using each of these split combinations. A comparative hypothesis test between the pure Poisson and ZIP modelling techniques was performed. The test revealed that the ZIP model fitted better than its counterpart. Our comprehensive empirical comparative study presented in this paper showed that the ZIP model yielded better predictions than the PRM and also demonstrated better robustness in prediction accuracy across the 50 data splits. | [
"zero-inflated poisson regression",
"poisson regression",
"comparative hypothesis test",
"software quality estimation",
"performance metrics"
] | [
"P",
"P",
"P",
"M",
"M"
] |
2rcxsdv | pin assignment of circuit cards and the routability of multilayer printed wiring backplanes | This paper examines the relationship between the pin assignment for circuit cards and the routability of the multilayer printed wiring backplane on which the cards are mounted. It is shown that the pin assignment should meet three objectives in order to facilitate backplane routing. Heuristic strategies for determining a pin assignment to attain these objectives are given. These strategies have been implemented in a program which was used in two separate experiments involving two different backplane configurations. In both cases, the use of the pin assignments obtained with the program led to major improvements in backplane routability. In one of the experiments, as a worst-case study, the most difficult circuit card was selected for a new card layout. The new layout was obtained without much difficulty; thus the pin assignment given the selected card by the program was realized. This result demonstrates that it is not as formidable a task as is sometimes believed to implement a pin assignment for circuit cards obtained by backplane considerations. | [
"assignment",
"circuits",
"routing",
"paper",
"relationships",
"object",
"order",
"heuristics",
"strategies",
"use",
"experience",
"configurability",
"layout",
"task",
"case studies"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"R"
] |
3xEtrLM | Motion from occlusions | In computer vision, occlusions are almost always seen as undesirable singularities that pose difficult challenges to image motion analysis problems, such as optic flow computation, motion segmentation, disparity estimation, or egomotion estimation. However, it is well known that occlusions are extremely powerful cues for depth or motion perception, and could be used to improve those methods. In this paper, we propose to recover camera motion information based solely on occlusions, by observing two especially useful properties: occlusions are independent of the camera rotation, and reveal direct information about the camera translation. We assume a monocular observer, undergoing general rotational and translational motion in a static environment. We present a formal model for occlusion points and develop a method suitable for occlusion detection. Through the classification and analysis of the detected occlusion points, we show how to retrieve information about the camera translation (FOE). Experiments with real images are presented and discussed in the paper. | [
"image motion analysis",
"egomotion estimation",
"occlusion analysis",
"stereo matching"
] | [
"P",
"P",
"R",
"U"
] |
3WVfdan | GeMDA: A multidimensional data partitioning technique for multiprocessor database systems | Several studies have repeatedly demonstrated that both the performance and scalability of a shared nothing parallel database system depend on the physical layout of data across the processing nodes of the system. Today, data is allocated in these systems using horizontal partitioning strategies. This approach has a number of drawbacks. If a query involves the partitioning attribute, then typically only a small number of the processing nodes can be used to speed up the execution of this query. On the other hand, if the predicate of a selection query includes an attribute other than the partitioning attribute, then the entire data space must be searched. Again, this results in waste of computing resources. In recent years, several multidimensional data declustering techniques have been proposed to address these problems. However, these schemes are too restrictive (e.g., FX, ECC, etc.), or optimized for a certain type of queries (e.g., DM, HCAM, etc.). In this paper, we introduce a new technique which is flexible and performs well for general queries. We prove its optimality properties, and present experimental results showing that our scheme outperforms DM and HCAM by a significant margin. | [
"parallel database system",
"data allocation",
"data fragmentation",
"query processing",
"system utilization"
] | [
"P",
"R",
"M",
"R",
"M"
] |
2JjJd6K | Global corporate web sites: an empirical investigation of content and design | Globally accessible web sites enable corporations to communicate with a wide variety of constituencies and represent a resource for any organization seeking a broad audience. Developing an effective multinational Internet presence requires designing web sites that operate in a diverse, multi-cultural environment. This is not simple, given that the field of web site development has lacked standards and rules relating to content and design. This study develops a conceptual model that differentiates web site content from design. The content component addresses the issue of what is included in the site and identifies the various types of information. The design component addresses presentation and navigational features. This conceptual web site content/design model was used to study the features of global corporate web sites to determine if the content and design features have become globally standardized or if differences exist as a result of national culture and/or industry. The majority of web site content features were found to be significantly different across various cultural groups. This, however, was not the case for design features. Furthermore, there appeared to be little association between the content and design features and industry classification. | [
"global corporations",
"web content",
"web design"
] | [
"P",
"R",
"R"
] |
-RX6Stj | Visualizing quantitatively the freshness of intact fresh pork using acousto-optical tunable filter-based visible/near-infrared spectral imagery | Although pork freshness is one of the top concerns to consumers, no systems are currently available to the pork industry that could quantitatively predict its spatial distribution in a rapid and nondestructive way. The main objective of this study was to investigate the feasibility of acousto-optical tunable filter (AOTF) based spectral imagery in the visible/near-infrared region for the non-destructive prediction and visualization of the spoilage-indicating chemicals over the surface of intact fresh pork. We developed an AOTF-based spectral imaging system (wavelength range: 550-1000 nm) to visualize pork freshness by mapping the predicted total volatile basic nitrogen (TVB-N) content over the surface. Reflectance hyperspectral images of pork loins in packages (n = 43) were acquired from day 3 to day 13 post-mortem, and the corresponding TVB-N references were recorded using conventional chemical procedures. The eligible muscle region of interest (EMROI) on a sample surface was auto-segmented, from which the signature spectrum was extracted. After standard normal variate (SNV) filtering, the signature spectra together with their chemical references were fed into a partial least squares regression (PLSR) to create a prediction model on a consecutive spectral range (575-940 nm). An analysis of the regression coefficients identified 9 important predictive wavelengths (575, 600, 615, 705, 765, 825, 885, 915, and 935 nm). The prediction model was subsequently refined to use the feature wavelengths only. A leave-one-out (LOO) cross-validation showed that the prediction of the TVB-N contents using the refined model was good and had a root mean square error (RMSE_cv) of 1.94 mg/100 g and a coefficient of determination (R^2_cv) of 0.89. Finally, the freshness distribution over an entire pork surface was visualized by mapping the pixel-wise TVB-N predictions in pseudo-colors based on the refined model. The spatial prediction was also verified in terms of mean and range. The mean values coincided well with their chemical references (with an R^2 of 0.81 and an RMSE of 2.58 mg/100 g), and the range is within reasonable limits (with 95% pixels within 0-50.0 mg/100 g). The results indicated that the AOTF-based spectral imagery system could be a promising method to predict pork freshness in an in situ test with unprecedented details of the spatial distribution of freshness. Industrial relevance: An AOTF-based VIS/NIR spectral imagery system has the potential for acceptance sampling in meat production plants or for hygienic supervision in the marketplace to predict the freshness of intact chill-stored pork. | [
"pork",
"acousto-optical tunable filter (aotf)",
"total volatile basic nitrogen (tvb-n) content",
"electrically tunable filter (etf) based spectral imagery",
"nondestructive testing",
"visible/near-infrared hyperspectral imaging"
] | [
"P",
"P",
"P",
"M",
"R",
"R"
] |
4i3vnV1 | Corporate taxonomies: report on a survey of current practice | Presents a report of a survey, based on case studies, of current practice in the building and use of taxonomies, with particular emphasis on corporate taxonomies to support enterprise information portals. Of the 22 case studies, six were designated as "core" and four are discussed in this paper: British Broadcasting Corporation, Glaxo Wellcome, Microsoft, and Unilever. Discusses key factors leading enterprises to consider the building of corporate taxonomies and describes the uses of taxonomies, principally as, inter alia, a source of authority for tagging, to aid navigation, to support search engines, as knowledge maps, and as a depository of enterprise retrieval languages. Key findings are presented. | [
"information",
"classification",
"literacy",
"information retrieval",
"companies",
"terminology"
] | [
"P",
"U",
"U",
"R",
"U",
"U"
] |
2BZuDCd | Discourse processing for context question answering based on linguistic knowledge | Motivated by the recent effort on scenario-based context question answering (QA), this paper investigates the role of discourse processing and its implication on query expansion for a sequence of questions. Our view is that a question sequence is not random, but rather follows a coherent manner to serve some information goals. Therefore, this sequence of questions can be considered as a mini discourse with some characteristics of discourse cohesion. Understanding such a discourse will help QA systems better interpret questions and retrieve answers. Thus, we examine three models driven by Centering Theory for discourse processing: a reference model that resolves pronoun references for each question, a forward model that makes use of the forward looking centers from previous questions, and a transition model that takes into account the transition state between adjacent questions. Our empirical results indicate that more sophisticated processing based on discourse transitions and centers can significantly improve the performance of document retrieval compared to models that only resolve references. This paper provides a systematic evaluation of these models and discusses their potentials and limitations in processing coherent context questions. | [
"discourse processing",
"context question answering",
"centering theory",
"intelligent user interfaces"
] | [
"P",
"P",
"P",
"U"
] |
16DnnUE | Variables sampling inspection scheme for resubmitted lots based on the process capability index Cpk | This paper attempts to develop a sampling inspection scheme by variables based on the process performance index for product acceptance determination, which examines the situation where resampling is permitted on lots not accepted on original inspection. The equations for plan parameters, the required sample size and the corresponding critical value, are derived based on the exact sampling distribution rather than an approximation approach; hence, the decisions made are more accurate and reliable. Moreover, the efficiency of the proposed variables resubmitted sampling plan is evaluated and compared with the existing variables single sampling plan. For illustrative purpose, an example is presented to demonstrate the use of the derived results for making a decision on product acceptance determination. | [
"quality control",
"acceptance sampling",
"decision making",
"lot resubmission",
"process fraction defectives"
] | [
"U",
"R",
"R",
"M",
"M"
] |
3RC:ZRT | A retrieval method adaptively reducing user's subjective impression gap | As an approach to search/retrieve such objects as pictures, music, perfumes and apparels on the Internet, sensitivity-vectors or kansei-vectors are useful since textual keywords are not sufficient to find objects that users want. The sensitivity-vector is an array of values. Each value indicates a degree of feeling or impression represented by a sensitivity word or kansei word. However, due to the gap between the user's subjective sensitivity (impression, image and feeling) degree and the corresponding value in the database, such an approach is not enough to retrieve what users want. This paper proposes a retrieval method to automatically and dynamically reduce such gaps by estimating a subjective criterion deviation (we call "SCD") using the user's retrieval history and fuzzy modeling. Additionally, the proposed method can avoid users' burden caused by conventional methods such as completing required questionnaires. This method can also reflect the dynamic change of user's preference which cannot be accomplished by using questionnaires. For the evaluation, an experiment was performed by building and using a perfume retrieval system. Through observing the transition of the deviation reduction degree, it was confirmed that the proposed method is effective. In the experiment, the machine could learn users' subjective criteria deviation as well as its dynamic change caused by factors such as user's preference, if the learning rate is well adjusted. | [
"information retrieval",
"kansei engineering",
"user profile"
] | [
"M",
"M",
"M"
] |
4nk&UbW | Analysis of a Priority Retrial Queue with Dependent Vacation Scheme and Application to Power Saving in Wireless Communication Systems | In this paper, we analyse a priority retrial queue with repeated inhomogeneous vacations. Two types of customers arrive at the system and if they find the server unavailable, the first type join an ordinary queue, while the second have to reattempt after a random period. The server departs for a vacation when the ordinary queue is empty upon a service completion. Under this scheme, when a vacation period expires, the server wakes up. If the priority queue is non-empty, it starts serving it exhaustively. Otherwise, it remains awake for a limited time period, waiting for a possible request. If no customers arrive during this period, it goes for another vacation with a different probability distribution from the previous one. The theoretical model has application to the power-saving mechanism in wireless communication systems in which the sleep duration of the device corresponds to the vacation of the server and the listening duration to the time-limited idle period. Various system performance and energy metrics are derived. Moreover, we compare the proposed policy with other vacation schemes. Optimization problems are formulated for finding best values for the critical parameters of the model which maximize the economy of energy achieved in the power save mode, subject to performance constraints. | [
"priorities",
"retrial queue",
"dependent vacation",
"wireless communication systems",
"sleep mode",
"energy saving",
"time limited idle period"
] | [
"P",
"P",
"P",
"P",
"R",
"R",
"R"
] |
23KQMGN | Pulsed laser induced microbubble in gold nanorod colloid | The pulsed laser induced microbubble in gold nanorod colloid was studied. The Faraday-Tyndall effect of the GNR colloid is characterized as a function of GNR concentration. Plasmonic light scattering is severe at the longitudinal surface plasmon resonance. The divergence angle of the laser beam through the gold colloid increases as the gold concentration rises. | [
"pulsed laser induced microbubble",
"gold nanorod",
"faradaytyndall effect",
"surface plasmon resonance",
"optical breakdown",
"photoacoustic"
] | [
"P",
"P",
"P",
"P",
"U",
"U"
] |
Qxrr-t4 | Cross-language information retrieval models based on latent topic models trained with document-aligned comparable corpora | In this paper, we study different applications of cross-language latent topic models trained on comparable corpora. The first focus lies on the task of cross-language information retrieval (CLIR). The Bilingual Latent Dirichlet Allocation model (BiLDA) allows us to create an interlingual, language-independent representation of both queries and documents. We construct several BiLDA-based document models for CLIR, where no additional translation resources are used. The second focus lies on the methods for extracting translation candidates and semantically related words using only per-topic word distributions of the cross-language latent topic model. As the main contribution, we combine the two former steps, blending the evidences from the per-document topic distributions and the per-topic word distributions of the topic model with the knowledge from the extracted lexicon. We design and evaluate the novel evidence-rich statistical model for CLIR, and prove that such a model, which combines various (only internal) evidences, obtains the best scores for experiments performed on the standard test collections of the CLEF 2001-2003 campaigns. We confirm these findings in an alternative evaluation, where we automatically generate queries and perform the known-item search on a test subset of Wikipedia articles. The main importance of this work lies in the fact that we train translation resources from comparable document-aligned corpora and provide novel CLIR statistical models that exhaustively exploit as many cross-lingual clues as possible in the quest for better CLIR results, without use of any additional external resources such as parallel corpora or machine-readable dictionaries. | [
"cross-language information retrieval",
"unsupervised cross-language lexicon extraction",
"probabilistic latent topic models",
"evidence-rich retrieval models"
] | [
"P",
"M",
"M",
"R"
] |
2Yn1kQ7 | foundations of the c++ concurrency memory model | Currently multi-threaded C or C++ programs combine a single-threaded programming language with a separate threads library. This is not entirely sound [7]. We describe an effort, currently nearing completion, to address these issues by explicitly providing semantics for threads in the next revision of the C++ standard. Our approach is similar to that recently followed by Java [25], in that, at least for a well-defined and interesting subset of the language, we give sequentially consistent semantics to programs that do not contain data races. Nonetheless, a number of our decisions are often surprising even to those familiar with the Java effort: We (mostly) insist on sequential consistency for race-free programs, in spite of implementation issues that came to light after the Java work. We give no semantics to programs with data races. There are no benign C++ data races. We use weaker semantics for trylock than existing languages or libraries, allowing us to promise sequential consistency with an intuitive race definition, even for programs with trylock. This paper describes the simple model we would like to be able to provide for C++ threads programmers, and explains how this, together with some practical, but often under-appreciated implementation constraints, drives us towards the above decisions. | [
"c++",
"concurrency",
"memory model",
"model",
"multi-threads",
"thread",
"program",
"programming language",
"language",
"libraries",
"sound",
"completeness",
"addressing",
"semantic",
"standardization",
"sequential consistency",
"consistency",
"data race",
"implementation",
"lighting",
"use",
"trylock",
"definition",
"paper",
"programmer",
"practical",
"constraint",
"memory consistency"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"R"
] |
1KusFCM | Cluster based MPSoC architecture: an on-chip message passing implementation | This paper proposes a hardware memory management unit to implement an on-chip message passing protocol for cluster-based multi-processor system-on-chip architectures. Within the architecture each cluster is composed of general purpose processors or digital signal processors, along with a memory. To maintain the coherence of the memory, a hardware memory management unit is added in the cluster to increase the performance and support an on-chip message passing communication. The hardware memory management unit has the capacity to allocate, control and limit the access to the memory. In order to show the benefit of our architecture, a performance comparison against a classical flat architecture and a state-of-the-art architecture is carried out. The results show an improvement that ranges from 1.2 to 21.77% over these two architectural models. Finally the hardware cost overhead is studied. | [
"cluster",
"mpsoc",
"hardware memory management unit",
"shared memory",
"distributed memory"
] | [
"P",
"P",
"P",
"M",
"M"
] |
4XKMNHw | IGBT gate driver IC with full-bridge output stage using a modified standard CMOS process | This paper discusses the benefits of a full-bridge output stage on integrated IGBT gate drive circuits. This full-bridge topology allows positive and negative gate voltages to be obtained using a single floating power supply. Short circuit protections have also been integrated, implementing an original soft shutdown process after an IGBT short circuit fault. The monolithic integration is based on an innovative high-voltage CMOS technology for power integrated circuits, using a standard low cost CMOS technology, requiring only one extra processing step. Lateral power N- and P-MOS transistors have been optimized using 2D simulators, considering both specific on-resistance and breakdown voltage in order to optimize the full-bridge output stage. The IGBT driver has been experimentally tested, producing 15 V gate-to-emitter voltage, and supplying the current peaks required by the 600 V IGBT switching processes. The driver characteristic response times are adapted to work at high switching frequency (>25 kHz) with high values of capacitive loads (3.7 nF). | [
"cmos",
"power integrated circuit",
"igbt driver",
"ldmos"
] | [
"P",
"P",
"P",
"U"
] |
2e3hobQ | Multiple solutions for the Brezis-Nirenberg problem with a Hardy potential and singular coefficients | By energy estimates and by establishing a local (PS) condition, we establish the multiplicity of solutions to a class of semi-linear Brezis-Nirenberg type problems with a Hardy term and singular coefficients via the pseudo-index theory. | [
"multiple solutions",
"brezisnirenberg problem",
"singular coefficients",
"pseudo-index",
"critical sobolevhardy exponents"
] | [
"P",
"P",
"P",
"P",
"U"
] |
2JWS8jF | Motion detection and tracking using belief indicators for an automatic visual-surveillance system | A motion detection and tracking algorithm for human and car activity surveillance is presented and evaluated by using the Pets'2000 test sequence. The proposed approach uses a temporal fusion strategy by using the history of events in order to improve instantaneous decisions. Normalized indicators updated at each frame summarize the history of specific events. For the motion detection stage, a fast updating algorithm of the background reference is proposed. The control of the updating at each pixel is based on a stability indicator estimated from inter-frame variations. The tracking algorithm uses a region-based approach. A belief indicator representing the tracking consistency for each object allows resolving ambiguities at the tracking level. A second specific tracking indicator representing the identity quality of each tracked object is updated by integrating object interaction. Tracking indicators make it possible to propagate uncertainties to higher levels of interpretation and are directly useful in tracking performance evaluation. | [
"motion detection",
"tracking",
"belief indicator",
"video-surveillance",
"belief updating",
"uncertainty management"
] | [
"P",
"P",
"P",
"U",
"R",
"M"
] |
-hGnNFQ | Support for partial run-time reconfiguration of platform FPGAs | Run-time partial reconfiguration of programmable hardware devices can be applied to enhance many applications in high-end embedded systems, particularly those that employ recent platform FPGAs. The effective use of this approach is often hampered by the complexity added to the system development process and by limited tool support. The paper is concerned with several aspects related to the effective exploitation of run-time partial reconfiguration, with particular emphasis on the generation of partial configurations and the run-time utilisation of the reconfigurable resources. The paper presents an approach inspired by traditional software development: partial configurations are produced by assembling components from a previously created library, thus enabling the embedded application developer to produce the configuration data required for run-time modifications with less effort than is needed with the conventional design flow. A tool that supports this approach is also described. A second set of issues is addressed by a run-time support library that provides facilities for managing the hardware reconfiguration process and the communication with the reconfigured circuits. The use of run-time partial reconfiguration requires a high level of system support. The paper describes one possible approach, presenting a demonstration system developed to support the present work and characterising its performance. In order to clarify the advantages of the approach to run-time reconfiguration discussed in the paper, two small case studies are described, the first on the use of dedicated datapaths for subword operations and the second on two-dimensional pattern-matching for bilevel images. Timing measurements for both cases are included. | [
"run-time reconfiguration",
"platform fpga",
"partial reconfiguration",
"bitstream manipulation",
"run-time support system"
] | [
"P",
"P",
"P",
"U",
"R"
] |
1PHBZsa | A note on the modelling of project networks with time constraints | In this note we show how to manage, by means of Generalized Precedence Relationships, some kinds of temporal constraints introduced by Chen et al. (1997) in project networks. | [
"project network",
"time constraints",
"project scheduling"
] | [
"P",
"P",
"M"
] |
-VSiqNu | R: a data analysis and statistical programming environment - an emerging tool for the geosciences | A statistical computation, programming and graphics language is available at no cost under the GNU Public Licence. This programming environment is available in source code and binary code forms and is supported for a number of operating systems. The R language is similar to the S language and is a package that allows users to apply methods of data analysis, statistics and graphical rendering of results in an interactive windows-based environment. The R language is supported by a large group of developers and users. Additional data analysis and statistical libraries have been contributed by many users and can be obtained from the Comprehensive R Archive Network (CRAN). | [
"graphics",
"rstatistical computing language",
"object-oriented programming environment",
"gnu licensing"
] | [
"P",
"M",
"M",
"M"
] |
PZy8xqc | The fundamentals of thermal-mass diffusion analogy | Correspondence between temperature and fractional saturation (w = C/C_sat) is fundamentally established. The validity of the correspondence is independent of the linearity of the sorption isotherm. The analogy is applicable to solutes of any phase and to absorbents that are a combination of gas, liquid, and solid. Detailed explanations are provided for diffusion under complex environments, e.g., temporally varying and spatially non-uniform temperature and sources. | [
"diffusion",
"fractional saturation",
"concentration",
"moisture",
"wetness",
"chemical potential"
] | [
"P",
"P",
"U",
"U",
"U",
"U"
] |
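For reference, the correspondence summarized above pairs the heat equation with the diffusion equation written in terms of fractional saturation; the constant-diffusivity form below is a minimal illustration, not the paper's general statement (which holds independently of isotherm linearity).

```latex
% Heat conduction vs. mass diffusion, with w = C/C_sat playing the
% role of temperature T (constant diffusivity assumed for brevity):
\frac{\partial T}{\partial t} = \alpha\,\nabla^{2} T
\qquad\longleftrightarrow\qquad
\frac{\partial w}{\partial t} = D\,\nabla^{2} w,
\qquad w = \frac{C}{C_{\mathrm{sat}}}
```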
4n2kUBd | Microarray data classification using inductive logic programming and gene ontology background information | There exist many databases containing information on genes that is useful as background information in machine learning analysis of microarray data. The gene ontology and gene ontology annotation projects are among the most comprehensive of these. We demonstrate how inductive logic programming (ILP) can be used to build classification rules for microarray data that naturally incorporate the gene ontology and its annotations as background knowledge without removing the inherent graph structure of the ontology. The ILP rules generated are parsimonious and easy to interpret. Copyright (C) 2010 John Wiley & Sons, Ltd. | [
"microarray",
"inductive logic programming",
"gene ontology",
"bioinformatics"
] | [
"P",
"P",
"P",
"U"
] |
56qeH7d | File replication, maintenance, and consistency management services in data grids | Data replication and consistency refer to the same data being stored in distributed sites, and kept consistent when one or more copies are modified. A good file maintenance and consistency strategy can reduce file access times and access latencies, and increase download speeds, thus reducing overall computing times. In this paper, we propose dynamic services for replicating and maintaining data in grid environments, and directing replicas to appropriate locations for use. To address a problem with the Bandwidth Hierarchy-based Replication (BHR) algorithm, a strategy for maintaining replicas dynamically, we propose the Dynamic Maintenance Service (DMS). We also propose a One-way Replica Consistency Service (ORCS) for data grid environments, a positive approach to resolving consistency maintenance issues we hope will strike a balance between improving data access performance and replica consistency. Experimental results show that our services are more efficient than other strategies. | [
"file replication",
"consistency management",
"data grids",
"dynamic maintenance"
] | [
"P",
"P",
"P",
"P"
] |
-a13yr& | Busy period analysis for M/PH/1 queues with workload dependent balking | We consider an M/PH/1 queue with workload-dependent balking. An arriving customer joins the queue and stays until served if and only if the system workload is no more than a fixed level at the time of his arrival. We begin by considering a fluid model where the buffer content changes at a rate determined by an external stochastic process with finite state space. We derive systems of first-order linear differential equations for the mean and LST (Laplace-Stieltjes Transform) of the busy period in this model and solve them explicitly. We obtain the mean and LST of the busy period in the M/PH/1 queue with workload-dependent balking as a special limiting case of this fluid model. We illustrate the results with numerical examples. | [
"busy period",
"m/ph/1 queue",
"balking",
"fluid model",
"workload process"
] | [
"P",
"P",
"P",
"P",
"R"
] |
58NrKJw | Inductivism and Parmenidean epistemology: Kyburg's way | According to Henry Kyburg, all extralogical and extramathematical propositions accepted as evidence and all propositions accepted inductively on the basis of such evidence are uncertain. There is a possibility of error. Consequently, neither the corpus of inductively accepted statements nor the corpus of statements accepted as evidence can serve as a standard for serious possibility in the sense I have deployed since the 1970s. The standard for serious possibility remains an unchanging Parmenidean standard. In contrast to other Parmenidean epistemologists who eschew inductive acceptance, Kyburg insists that the corpus of evidence and of inductively accepted statements is subject to critical review and change; but the changes have no bearing on the standard for serious possibility. I have always agreed with Henry's emphasis on a distinction between acceptance as evidence and inductive acceptance. But I have insisted that the corpus of evidence or state of full belief is a standard for serious possibility and that the standard is subject to modification. Kyburg does think of acceptance as evidence and inductive acceptance as modal notions and has recently used the expression "serious possibility" in this connection. But when Kyburg and Teng speak of "risky knowledge", they are speaking of claims that might be false in the sense of serious possibility that they seem to be suggesting is immune to change and seems to correlate with serious possibility as I have used it since the 1970s. So inductive and evidential acceptance are modal notions subject to change, but they are not to be confused with the notion of serious possibility of error or riskiness. (C) 2010 Elsevier Inc. All rights reserved. | [
"parmenidean epistemology",
"acceptance as evidence",
"inductive acceptance",
"serious possibility",
"inductive rejection"
] | [
"P",
"P",
"P",
"P",
"M"
] |
4KThqX- | Maturing of OpenFlow and Software-defined Networking through deployments | Software-defined Networking (SDN) has emerged as a new paradigm of networking that enables network operators, owners, vendors, and even third parties to innovate and create new capabilities at a faster pace. The SDN paradigm shows potential for all domains of use, including data centers, cellular providers, service providers, enterprises, and homes. Over a three-year period, we deployed SDN technology at our campus and at several other campuses nation-wide with the help of partners. These deployments ranged from the first-ever SDN prototype in a lab to a (small) global deployment. The four-phased deployments and the demonstration of new networking capabilities enabled by SDN played an important role in maturing SDN and its ecosystem. We share our experiences and lessons learned concerning the demonstration of SDN's potential; its influence on successive versions of the OpenFlow specification; the evolution of the SDN architecture; the performance of SDN and its various components; and the growth of the ecosystem. | [
"openflow",
"deployments",
"sdn",
"experience",
"geni"
] | [
"P",
"P",
"P",
"P",
"U"
] |
34nH72E | Automatic detection of epileptic seizures on the intra-cranial electroencephalogram of rats using reservoir computing | In this paper we propose a technique based on reservoir computing (RC) to mark epileptic seizures on the intra-cranial electroencephalogram (EEG) of rats. RC is a recurrent neural network training technique which has been shown to possess good generalization properties with limited training. The system is evaluated on data containing two different seizure types: absence seizures from genetic absence epilepsy rats from Strasbourg (GAERS) and tonic-clonic seizures from kainate-induced temporal-lobe epilepsy rats. The dataset consists of 452 hours from 23 GAERS and 982 hours from 15 kainate-induced temporal-lobe epilepsy rats. During the preprocessing stage, several features are extracted from the EEG. A feature selection algorithm selects the best features, which are then presented as input to the RC-based classification algorithm. To classify the output of this algorithm a two-threshold technique is used. This technique is compared with other state-of-the-art techniques. A balanced error rate (BER) of 3.7% and 3.5% was achieved on the data from GAERS and kainate rats, respectively. This resulted in a sensitivity of 96% and 94% and a specificity of 96% and 99%, respectively. The state-of-the-art technique for GAERS achieved a BER of 4%, whereas the best technique to detect tonic-clonic seizures achieved a BER of 16%. Our method outperforms up-to-date techniques and only a few parameters need to be optimized on a limited training set. It is therefore suited as an automatic aid for epilepsy researchers and is able to eliminate the tedious manual review and annotation of EEG. | [
"reservoir computing",
"neural networks",
"eeg classification",
"automatic seizure detection",
"experimental animal models for epilepsy"
] | [
"P",
"P",
"R",
"R",
"M"
] |
36NzvAg | Fuzzy economic production for production inventory | A production cycle is defined using both production and sale, in which at a certain point production stops until all inventories are sold out. For a planning period of T days, the total cost function is F(q), where q represents the production quantity of each cycle. The best production quantity in the crisp sense is $q^*$. Fuzzifying q gives a fuzzy number $\tilde{Q}$; how to determine the best production quantity in the light of $\tilde{Q}$ is the subject of this paper. Suppose the membership function of $\tilde{Q}$ is a trapezoidal fuzzy number set $(q_1, q_2, q_3, q_4)$ satisfying the condition $0 < q_1 < q_2 < q_3 < q_4$; the membership function of the fuzzy cost $F(\tilde{Q})$ is $\mu_{F(\tilde{Q})}(z)$, and its centroid, which is taken to be the estimated total cost, is minimized under the condition $0 < q_1^* < q_2^* < q_3^* < q_4^*$. From the trapezoidal fuzzy number set $(q_1^*, q_2^*, q_3^*, q_4^*)$, its centroid is found as the best production quantity. (C) 2000 Elsevier Science B.V. All rights reserved. | [
"economics production",
"membership functions",
"extension principle",
"fuzzy production inventory",
"fuzzy production quantity"
] | [
"P",
"P",
"U",
"R",
"R"
] |
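The centroid that the record above uses as the defuzzified estimate has a closed form for a trapezoidal fuzzy number; the helper below is an illustrative sketch (function and variable names are ours).

```python
def trapezoid_centroid(q1, q2, q3, q4):
    """x-coordinate of the centroid of a trapezoidal fuzzy number with
    support [q1, q4] and core [q2, q3]; assumes q1 < q2 < q3 < q4."""
    num = q4*q4 + q3*q3 + q3*q4 - q1*q1 - q2*q2 - q1*q2
    return num / (3.0 * (q3 + q4 - q1 - q2))

# Symmetric example: the centroid falls at the midpoint of the support.
print(trapezoid_centroid(0, 1, 2, 3))  # 1.5
```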
2TcMRvX | Exploring cDNA Phage Display for Autoantibody Profiling in the Serum of Multiple Sclerosis Patients | We applied a cDNA phage display method called serological antigen selection (SAS) to identify immunogenic targets that evoke an autoantibody response in the serum of multiple sclerosis (MS) patients. This method involves the display of a cDNA expression library, in this study an MS brain library, on filamentous phage and subsequent selection using patient immunoglobulin G (IgG). To apply the SAS technology to autoantibodies in the serum of MS patients, an optimization was necessary to deplete cDNA products that encode IgG fragments derived from B cells present in the MS brain plaques. We describe a differential screening procedure in which positive selection rounds on MS serum and negative selection rounds on healthy control serum were alternated to optimize the selection procedure. As a result, a substantial decrease of IgG-displaying phage clones was observed after each negative selection round, thereby preventing an overgrowth of IgG-displaying phage clones. Our depletion strategy was therefore successful in preventing the enrichment of IgG-displaying phage clones. This approach will facilitate the identification of possible MS-related antigens. | [
"multiple sclerosis",
"serological antigen selection",
"filamentous phage",
"cdna display",
"autoantibody repertoire",
"autoimmune disease"
] | [
"P",
"P",
"P",
"R",
"M",
"U"
] |
3suTEKV | Local diagnosability of generic star-pyramid graph | The problem of fault diagnosis in networks has been discussed widely. In this paper, we study the local diagnosability of a generic star-pyramid graph. We prove that under the PMC model the local diagnosability of each vertex in a generic star-pyramid graph is equal to its degree and that the generic star-pyramid graph has the strong local diagnosability property. Then we study the local diagnosability of a faulty graph. After showing some properties of the graph, we prove that a generic star-pyramid graph keeps the strong property no matter how many edges are faulty, under the condition that each vertex is incident with at least four fault-free edges. Crown Copyright (C) 2009 Published by Elsevier B.V. All rights reserved. | [
"generic star-pyramid graph",
"pmc model",
"strong local diagnosability",
"interconnection networks"
] | [
"P",
"P",
"P",
"M"
] |
178s2p3 | Dynamical sources in information theory: Fundamental intervals and word prefixes | A quite general model of source that comes from dynamical systems theory is introduced. Within this model, some basic problems of algorithmic information theory are analysed. The main tool is a new object, the generalized Ruelle operator, which can be viewed as a "generating" operator for fundamental intervals (associated to information sharing common prefixes). Its dominant spectral objects are linked with important parameters of the source, such as the entropy, and play a central role in all the results. | [
"sources",
"information theory",
"fundamental intervals",
"dynamical systems",
"entropy",
"transfer operator"
] | [
"P",
"P",
"P",
"P",
"P",
"M"
] |
oc:gH51 | On b-colorings in regular graphs | A b-coloring is a coloring of the vertices of a graph such that each color class contains a vertex that has a neighbor in all other color classes. El-Sahili and Kouider have conjectured that every d-regular graph with girth at least 5 has a b-coloring with d+1 colors. We show that the Petersen graph refutes this conjecture, and we propose a new formulation of this question and give a positive answer for small degree. | [
"coloration",
"b b -coloring b b b b b",
"a a -chromatic number a a a a a",
"b b -chromatic number b b b b b"
] | [
"P",
"R",
"M",
"M"
] |
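To make the b-coloring definition in the record above concrete, the sketch below checks whether a given proper coloring is a b-coloring; networkx and its built-in Petersen graph are an assumed convenience, not part of the paper.

```python
import networkx as nx

def is_b_coloring(G, coloring):
    """True iff the coloring is proper and every color class contains a
    b-vertex, i.e. a vertex whose neighborhood sees all other colors."""
    if any(coloring[u] == coloring[v] for u, v in G.edges()):
        return False  # not even a proper coloring
    colors = set(coloring.values())
    return all(
        any(colors - {c} <= {coloring[w] for w in G.neighbors(v)}
            for v in G if coloring[v] == c)
        for c in colors
    )

# The 3-regular, girth-5 counterexample named in the abstract:
G = nx.petersen_graph()
```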
25pRh-s | Tracking birth of vortex in flows | In order to avoid undesired effects from vortices in many industrial processes, it is important to know the set of operating parameters at which the flow does not have recirculation. The map of these conditions in the parameter space is called the vortex-free operating window. Here, we propose an efficient way to construct such a window automatically without expensively checking every possible flow state. The proposed technique is based on tracking a path in the parameter space along which the local kinematic condition for vortex birth at a stagnation point is satisfied. This multiparameter continuation is performed by solving an augmented Navier-Stokes system. In the augmented system, the birth condition and the governing equations are represented in a Galerkin finite element context. We used the proposed method in two important coating flows with free surfaces: single-layer slot coating and forward roll coating. | [
"birth of vortex",
"multiparameter continuation",
"coating flow",
"automated flow feature tracking",
"flow topology",
"galerkin finite element method"
] | [
"P",
"P",
"P",
"M",
"M",
"R"
] |
21NMGAj | Convergence of the variable two-step BDF time discretisation of nonlinear evolution problems governed by a monotone potential operator | The initial-value problem for a first-order evolution equation is discretised in time by means of the two-step backward differentiation formula (BDF) on a variable time grid. The evolution equation is governed by a monotone and coercive potential operator. On a suitable sequence of time grids, the piecewise constant interpolation and a piecewise linear prolongation of the time discrete solution are shown to converge towards the weak solution if the ratios of adjacent step sizes are close to 1 and do not vary too much. | [
"convergence",
"time discretisation",
"nonlinear evolution problem",
"monotone potential operator",
"backward differentiation formula",
"linear multistep method",
"non-uniform time grid"
] | [
"P",
"P",
"P",
"P",
"P",
"M",
"M"
] |
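One common way to write the variable two-step BDF studied above, for u' = f(t, u) on a grid with steps tau_n = t_n - t_{n-1} and ratio r = tau_{n+1}/tau_n, is shown below; it reduces to the classical BDF2 when r = 1 (a standard textbook form, not quoted from the paper).

```latex
\frac{1+2r}{1+r}\,u^{n+1} \;-\; (1+r)\,u^{n} \;+\; \frac{r^{2}}{1+r}\,u^{n-1}
  \;=\; \tau_{n+1}\, f\!\left(t_{n+1},\, u^{n+1}\right),
\qquad r = \frac{\tau_{n+1}}{\tau_{n}}
```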
-a9pbCV | Design a deblocking filter with three separate modes in DCT-based coding | The reconstructed images from highly compressed data have noticeable image degradations, such as blocking artifacts near the block boundaries. Post-processing appears to be the most feasible solution because it does not require any existing standards to be changed. Markedly reducing blocking effects can increase compression ratios for a particular image quality or improve the quality of equally compressed images. In this work, a novel deblocking algorithm is proposed based on three filtering modes in terms of the activity across block boundaries. By properly considering the masking effect of the HVS (Human Visual System), an adaptive filtering decision is integrated into the deblocking process. According to three different deblocking modes appropriate for local regions with different characteristics, the perceptual and objective quality are improved without excessively smoothing image details or insufficiently reducing the strong blocking effect in flat regions. According to the simulation results, the proposed method outperforms other deblocking algorithms with respect to PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural SIMilarity). | [
"deblocking filter",
"post-processing",
"blocking effects",
"hvs",
"objective quality",
"psnr",
"ssim",
"image compression",
"dct",
"perceptual quality"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"R",
"U",
"R"
] |
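Since the evaluation in the record above relies on PSNR, a minimal reference implementation is sketched here (8-bit images assumed); SSIM is more involved and is available in libraries such as scikit-image.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and a
    compressed/deblocked image; higher means less distortion."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```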
15id3FA | IT/IS implementation risks and their impact on firm performance | There has been considerable theoretical work on the role of information systems (IS) in creating competitive advantage and enhancing organizational performance. The literature identifies a consistent lack of success by organizations in achieving business benefits from their IS investments and, in particular, their difficulties in obtaining a sustainable competitive advantage. A great deal of debate exists nowadays concerning the contribution of information technology (IT) risks to organizational performance. Previous research has dealt with the examination of the existing relationships between the implemented information technology and a firm's performance variables. This research focuses on the IT impact on a firm's non-financial IT risk. The research was conducted using questionnaires that were sent to the world's five hundred largest corporations, as published in Fortune magazine (European edition, No. 14, 2003), and to Greek companies. The results indicate that IT risk factors affect mainly coordination and partially information ability, but not productivity. Furthermore, the most significant risk factors affecting business performance are management ability, information integrity, controllability and exclusivity. | [
"it risk factors",
"business performance",
"it evaluation"
] | [
"P",
"P",
"M"
] |
1btUvxH | Vibration analysis of spherical structural elements using the GDQ method | This paper deals with the dynamical behaviour of hemispherical domes and spherical shell panels. The First-order Shear Deformation Theory (FSDT) is used to analyze the above moderately thick structural elements. The treatment is conducted within the theory of linear elasticity, when the material behaviour is assumed to be homogeneous and isotropic. The governing equations of motion, written in terms of internal resultants, are expressed as functions of five kinematic parameters, by using the constitutive and the congruence relationships. The boundary conditions considered are clamped (C), simply supported (S) and free (F) edge. Numerical solutions have been computed by means of the technique known as the Generalized Differential Quadrature (GDQ) Method. These results, which are based upon the FSDT, are compared with the ones obtained using commercial programs such as Abaqus, Ansys, Femap/Nastran, Straus, Pro/Engineer, which also elaborate a three-dimensional analysis. The effect of different grid point distributions on the convergence, the stability and the accuracy of the GDQ procedure is investigated. The convergence rate of the natural frequencies is shown to be fast and the stability of the numerical methodology is very good. The accuracy of the method is sensitive to the number of sampling points used, to their distribution and to the boundary conditions. | [
"gdq method",
"spherical shell panels",
"differential quadrature",
"sampling point distribution",
"free vibrations"
] | [
"P",
"P",
"P",
"R",
"R"
] |
1qY4hrG | An information hiding algorithm based on intra-prediction modes and matrix coding for H.264/AVC video stream | An information hiding algorithm based on intra-prediction modes and matrix coding is proposed for the H.264/AVC video stream. It utilizes the block types and modes of intra-coded blocks to embed the watermark. Intra-4x4 coded blocks (I4-blocks) are divided into groups, and two watermark bits are mapped to every three I4-blocks by matrix coding, which establishes the mapping between watermark bits and intra-prediction modes. Since only the mode of one I4-block is changed for every two watermark bits, a high PSNR and a slight bitrate increase after watermark embedding can be guaranteed. Moreover, an embedding position template is utilized to select candidate I4-blocks for watermark embedding, which further enhances the security of the watermark information. Experimental results on several test sequences demonstrate that the proposed approach can realize blind extraction with real-time performance. | [
"intra-prediction mode",
"h.264/avc",
"video watermarking",
"compressed domain"
] | [
"P",
"P",
"R",
"U"
] |
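The "two watermark bits per three I4-blocks" mapping in the record above matches the standard (1, 3, 2) matrix-coding construction, sketched below on abstract carrier bits; how a carrier bit is derived from an I4-block's prediction mode is abstracted away, and all names are ours.

```python
import numpy as np

# Parity-check matrix whose three columns are the distinct nonzero
# 2-bit vectors, so any syndrome change costs at most one carrier flip.
H = np.array([[1, 0, 1],
              [0, 1, 1]])

def embed(carriers, message):
    """Embed 2 message bits into 3 carrier bits, flipping at most one."""
    x = np.array(carriers) % 2
    diff = (H.dot(x) + np.array(message)) % 2      # syndrome XOR message
    if diff.any():
        idx = next(i for i in range(3) if (H[:, i] == diff).all())
        x[idx] ^= 1                                # change one carrier only
    return x

def extract(carriers):
    return H.dot(np.array(carriers)) % 2           # syndrome = message

stego = embed([1, 0, 1], [1, 0])
assert (extract(stego) == [1, 0]).all()
```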
2HKmsg5 | Ontology-supported FAQ processing and ranking techniques | This paper describes an FAQ system for the Personal Computer (PC) domain, which employs ontology as the key technique to pre-process FAQs and process user queries. It is also equipped with an enhanced ranking technique to present retrieved, query-relevant results. Basically, the system is based on the wrapper technique to help clean, retrieve, and transform FAQ information collected from a heterogeneous environment, storing it in an ontological database. During retrieval of FAQs, the system trims irrelevant query keywords, employs either full or partial keyword matching to retrieve FAQs, and removes conflicting FAQs before returning the final results to the user. Ontology plays the key role in all the above activities. To produce a more effective presentation of the search results, the system employs an enhanced ranking technique, which includes Appearance Probability, Satisfaction Value, Compatibility Value, and Statistic Similarity Value as four measures, properly weighted, to rank the FAQs. Our experiments show the system does improve the precision rate and produces better ranking results. The proposed FAQ system manifests the following interesting features. First, the ontology-supported FAQ extraction from webpages can clean FAQ information by removing redundant data, restoring missing data, and resolving inconsistent data. Second, the FAQs are stored in an ontology-directed internal format, which supports semantics-constrained retrieval of FAQs. Third, the ontology-supported natural language processing of user queries helps pinpoint the user's intent. Finally, the partial keyword match-based ranking method helps present user-most-wanted, conflict-free FAQ solutions to the user. | [
"ontology",
"faq",
"ranking method"
] | [
"P",
"P",
"P"
] |
3cFTTEt | Assessing pipe failure rate and mechanical reliability of water distribution networks using data-driven modeling | In this paper two models are presented based on Data-Driven Modeling (DDM) techniques (Artificial Neural Network and neuro-fuzzy systems) for more comprehensive and more accurate prediction of the pipe failure rate and an improved assessment of the reliability of pipes. Furthermore, a multivariate regression approach has been developed to enable comparison with the DDM-based methods. Unlike the existing simple regression models for prediction of pipe failure rates in which only few factors of diameter, age and length of pipes are considered, in this paper other parameters such as pressure and pipe depth, are also included. Furthermore, an investigation is carried out on most commonly used mechanical reliability relationships and the results of incorporation of the proposed pipe failure models in the reliability index are compared. The proposed models are applied to a real case study involving a large water distribution network in Iran and the results of model predictions are compared with measured pipe failure data. Compared with the results of neuro-fuzzy and multivariate regression models, the outcomes of the artificial neural network model are more realistic and accurate in the prediction of pipe failure rates and evaluation of mechanical reliability in water distribution networks. | [
"pipe failure rate",
"mechanical reliability",
"water distribution networks",
"artificial neural network",
"neuro-fuzzy system",
"multivariate regression"
] | [
"P",
"P",
"P",
"P",
"P",
"P"
] |
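As a baseline for the multivariate regression variant described in the record above, the sketch below fits a linear model on synthetic data with the paper's five predictors; all numbers are fabricated for illustration, and an ANN or neuro-fuzzy model would take the place of LinearRegression.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
# Columns: diameter [mm], age [yr], length [m], pressure [bar], depth [m].
X = rng.uniform([50, 0, 10, 1, 0.5], [500, 50, 1000, 10, 3.0], size=(n, 5))
# Synthetic failure rate (breaks/km/yr), illustrative only.
y = 0.02 * X[:, 1] + 0.001 * X[:, 2] + 0.05 * X[:, 3] + rng.normal(0, 0.2, n)

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "R^2:", model.score(X, y))
```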
21-zqWN | Supporting large-scale travel surveys with smartphones: A practical approach | A novel approach to utilizing smartphones for travel surveys is proposed. Trips are automatically reconstructed by simply carrying a smartphone. Accelerometer data is combined with GPS speed and location information. Best classification results were achieved for detecting walk, bus and bike trips. | [
"travel survey",
"smartphones",
"accelerometer",
"gps",
"mobility data",
"transport modes",
"mode detection"
] | [
"P",
"P",
"P",
"P",
"M",
"U",
"M"
] |
1U2jBt- | the system of common algorithmic space to create visual models of phenomena and processes | The main result of this paper is a new system of tools for modeling and visualization of phenomena and processes. A generalized, effective parallel-and-recursive algorithm is the basis of the system. It solves in a unified manner a variety of interrelated geometrical problems arising in the construction of visual models. Implementation of the algorithm is presented using the example of modeling thermo-physical processes in welded plates. | [
"parallel-and-recursive algorithm",
"thermo-physical processes",
"system of modeling tools"
] | [
"P",
"P",
"R"
] |
3iKTqF3 | Modeling and numerical approximation of a 2.5D set of equations for mesoscale atmospheric processes | The set of 3D inviscid primitive equations for the atmosphere is dimensionally reduced by a Discontinuous Galerkin discretization in one horizontal direction. The resulting model is a 2D system of balance laws with a source term depending on the layering procedure and the choice of coupling fluxes, which is established in terms of upwind considerations. The 2.5D system is discretized via a WENOTVD scheme based on a flux-limiter approach. We study four test cases related to atmospheric phenomena to analyze the physical validity of the model. | [
"primitive equations",
"discontinuous galerkin",
"layering",
"wenotvd schemes",
"upwind flux",
"test cases for dynamical cores"
] | [
"P",
"P",
"P",
"P",
"R",
"M"
] |
46SnNLu | Characterizations of recognizable picture series | The theory of two-dimensional languages as a generalization of formal string languages was motivated by problems arising from image processing and pattern recognition, and also concerns models of parallel computing. Here we investigate power series on pictures. These are functions that map pictures to elements of a semiring and provide an extension of two-dimensional languages to a quantitative setting. We assign weights to different devices, ranging from picture automata to tiling systems. We will prove that, for commutative semirings, the behaviours of weighted picture automata are precisely alphabetic projections of series defined in terms of rational operations, and also coincide with the families of series characterized by weighted tiling or weighted domino systems. (C) 2007 Elsevier B. V. All rights reserved. | [
"picture series",
"two-dimensional languages",
"automata",
"unambiguity"
] | [
"P",
"P",
"P",
"U"
] |
3SK:dtf | automation is a breeze with autoit | AutoIt is a free scripting language for Microsoft Windows that simulates Windows commands, mouse movements, and mouse-clicks; sends keystrokes to applications; and works with the clipboard to cut and paste text, among other tasks. Unlike many other scripting languages, AutoIt is able to interact with programs the same way that your users do - by actually using the mouse and keyboard shortcuts. The Solution Center has created several automated programs using AutoIt that perform many of the tasks repeated each day. Saving time, reducing error, and adding ease to report options are just some of the ways the Solution Center at Iowa State University has exploited AutoIt's features. AutoIt has allowed us to better serve our students and faculty members by reducing the amount of time it takes to complete a scripted task, removing human error, and eliminating some repetitive tasks for staff. This session is intended for all levels of user support staff. | [
"automation",
"autoit",
"scripting",
"windows",
"customer service",
"help desk"
] | [
"P",
"P",
"P",
"P",
"U",
"U"
] |
3ugsb8C | ENERGY-AWARE DISCRETE PROBABILISTIC LOCALIZATION OF WIRELESS SENSOR NETWORKS | Localizing sensor nodes is critical in the context of wireless sensor network applications. It has been shown that, for some applications, low-overhead discrete localization achieves results comparable to costly fine localization. This research presents a hybrid energy-aware discrete localization method that requires no transmission overhead from the sensor nodes. The proposed method, E-KalmaNN, is a combination of a Kalman-inspired localization and Artificial Neural Networks estimation that updates the position of a node with respect to a mobile reference. E-KalmaNN runs on the sensor nodes and supports different listening/wakeup frequencies for different nodes to balance power requirements with localization accuracy for each node. Simulation results show that the method converges to the correct position of the node in a relatively short time with high average location accuracy. Compared to the localization methods found in the literature, E-KalmaNN localizes with comparable accuracy, lower transmission costs and/or fewer motion restrictions. | [
"discrete localization",
"energy-aware robotics and sensor networks"
] | [
"P",
"M"
] |
4JNQtJ9 | An improved image analogy method based on adaptive CUDA-accelerated neighborhood matching framework | The image analogy framework is especially useful for synthesizing appealing images from non-homogeneous input and gives users creative control over the synthesized results. However, the traditional framework did not adaptively choose the search strategy according to the different textural content of neighborhoods. Besides, the synthesis speed is slow due to the intensive computation involved in neighborhood matching. In this paper we present a CUDA-based neighborhood matching algorithm for image analogy. Our algorithm adaptively applies the global search of the exact L2 nearest neighbor and the k-coherence search strategy during synthesis according to the different textural features of images, which is especially useful for non-homogeneous textures. To consistently implement the above two search strategies on the GPU, we adopt a fast k nearest neighbor searching algorithm based on CUDA. Such acceleration greatly reduces the time of the pre-process of k-coherence search and the synthesis procedure of the global search, which makes possible the adjustment of important synthesis parameters. We further adopt synthesis magnification to get the final high-resolution synthesis image for running efficiency. Experimental results show that our algorithm is suitable for various applications of the image analogy framework and takes full advantage of the GPU's parallel processing capability to improve synthesis speed and get satisfactory synthesis results. | [
"image analogy",
"cuda",
"synthesis magnification",
"texture synthesis",
"parallel optimization"
] | [
"P",
"P",
"P",
"R",
"M"
] |
3znsZY6 | Group opinion aggregation based on a grading process: A method for constructing triangular fuzzy numbers | This paper proposes a novel method to derive the collective opinion of a group of members as expressed in a grading process in which individual group members evaluate objects or events by assigning numerical scores. The collective opinions are represented using triangular fuzzy numbers whose construction is based on the possibility distribution of the grading process. The mode and spreads of the fuzzy number are estimated using a weight determination technique. The usefulness of the proposed approach is demonstrated in a group decision-making problem involving multiple evaluation criteria. The results demonstrate that the fuzzy number construction method provides a better representation of the group preference than traditional methods. (C) 2004 Elsevier Ltd. All rights reserved. | [
"triangular fuzzy number",
"possibility distribution",
"group decision-making",
"fuzzy number construction",
"fuzzy multiple attribute decision-making"
] | [
"P",
"P",
"P",
"P",
"M"
] |
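A minimal sketch of building a triangular fuzzy number from a set of grades follows; taking frequency weights for the mode and the extreme grades for the spreads is a deliberate simplification of the paper's weight-determination technique, for illustration only.

```python
import numpy as np

def triangular_from_scores(scores):
    """Return (l, m, u): spreads from the extreme grades, mode m from a
    frequency-weighted mean of the distinct grades (an assumption here)."""
    s = np.asarray(scores, dtype=float)
    values, counts = np.unique(s, return_counts=True)
    m = float(values @ (counts / counts.sum()))
    return float(s.min()), m, float(s.max())

print(triangular_from_scores([6, 7, 7, 8, 9]))  # (6.0, 7.4, 9.0)
```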
4uspgbQ | Determination of swing curve shifts as a function of illumination conditions: Impact on the CD uniformity | The use of a wide range of optical illumination settings may generate significant shifts in the photoresist (PR) swing curves. For technologies where critical dimension control of a few nanometers is required, the impact of these shifts on the critical dimension uniformity must be taken into account. In this paper, we have reproduced and quantified the shift of different swing curves (dose-to-clear and critical dimension) with a 248 nm positive PR. The impact of such a shift on critical dimension uniformity has been determined on a Deep UV scanner. | [
"swing curve",
"cd uniformity",
"duv positive photoresist"
] | [
"P",
"P",
"M"
] |
-UnHxJj | From crispness to fuzziness: Three algorithms for soft sequential pattern mining | Most real world databases consist of historical and numerical data such as sensor, scientific or even demographic data. In this context, classical algorithms extracting sequential patterns, which are well adapted to the temporal aspect of data, do not allow numerical information processing. Therefore, the data are pre-processed to be transformed into a binary representation, which leads to a loss of information. Fuzzy algorithms have been proposed to process numerical data using intervals, particularly fuzzy intervals, but none of these methods is satisfactory. Therefore this paper completely defines the concepts linked to fuzzy sequential pattern mining. Using different fuzzification levels, we propose three methods to mine fuzzy sequential patterns and detail the resulting algorithms (SPEEDYFUZZY, MINIFUZZY, and TOTALLYFUZZY). Finally, we assess them through different experiments, thus revealing the robustness and the relevancy of this work. | [
"sequential patterns",
"numerical data",
"fuzzy intervals"
] | [
"P",
"P",
"P"
] |
46Rg-iv | Robustly estimating changes in image appearance | We propose a generalized model of image "appearance change" in which brightness variation over time is represented as a probabilistic mixture of different causes. We define four generative models of appearance change due to (1) object or camera motion; (2) illumination phenomena; (3) specular reflections; and (4) "iconic changes" which are specific to the objects being viewed. These iconic changes include complex occlusion events and changes in the material properties of the objects. We develop a robust statistical framework for recovering these appearance changes in image sequences. This approach generalizes previous work on optical flow to provide a richer description of image events and more reliable estimates of image motion in the presence of shadows and specular reflections. (C) 2000 Academic Press. | [
"specularities",
"iconic change",
"optical flow",
"mixture models",
"outliers",
"probabilistic models",
"illumination change"
] | [
"P",
"P",
"P",
"R",
"U",
"R",
"R"
] |
3dzTAh: | A new machine condition monitoring method based on likelihood change of a stochastic model | We detect defective machine conditions with only a normal-condition hidden Markov model. Defects can be precisely detected by observing the likelihood change of the normal HMM on incoming data. If defect data are available, types of defect can also be identified with defect HMMs. The proposed method showed accurate and robust diagnostic performance in various application examples. | [
"machine condition monitoring",
"hidden markov model (hmm)",
"pattern recognition",
"weld monitoring"
] | [
"P",
"M",
"U",
"M"
] |
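The likelihood-change idea in the record above can be prototyped in a few lines; hmmlearn is an assumed library choice, the training features are placeholders, and the alarm threshold would in practice be calibrated on validation data.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed library choice

# Train on normal-condition feature vectors only (placeholder data).
X_normal = np.random.default_rng(0).normal(size=(500, 3))
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(X_normal)

def window_scores(X, win=50):
    """Average log-likelihood per window under the normal-condition HMM;
    a sustained drop flags a possible machine defect."""
    return [model.score(X[i:i + win]) / win
            for i in range(0, len(X) - win + 1, win)]
```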
58614Yd | A coordinate-invariant approach to multiresolution motion analysis | Multiresolution motion analysis has gained considerable research interest as a unified framework to facilitate a variety of motion editing tasks. Within this framework, motion data are represented as a collection of coefficients that form a coarse-to-fine hierarchy. The coefficients at the coarsest level describe the global pattern of a motion signal, while those at fine levels provide details at successively finer resolutions. Due to the inherent nonlinearity of the orientation space, the challenge is to generalize multiresolution representations for motion data that contain orientations as well as positions. Our goal is to develop a multiresolution analysis method that guarantees coordinate-invariance without singularity. To do so, we employ two novel ideas: hierarchical displacement mapping and motion filtering. Hierarchical displacement mapping provides an elegant formulation to describe positions and orientations in a coherent manner. Motion filtering enables us to separate motion details level-by-level to build a multiresolution representation in a coordinate-invariant way. Our representation facilitates multiresolution motion editing through level-wise coefficient manipulation that uniformly addresses issues raised by motion modification, blending, and stitching. (C) 2001 Academic Press. | [
"coordinate-invariance",
"motion editing",
"multiresolution analysis",
"hierarchical techniques",
"motion signal processing"
] | [
"P",
"P",
"P",
"M",
"M"
] |
1&KcAJT | Gait adaptation in a quadruped robot | A newborn foal can learn to walk soon after birth through a process of rapid adaptation acting on its locomotor controller. It is proposed here that this kind of adaptation can be modeled as a distributed system of adaptive modules (AMs) acting on a distributed system of adaptive oscillators called Adaptive Ring Rules (ARRs), augmented with appropriate and simple reflexes. It is shown that such a system can self-program through interaction with the environment. The adaptation emerges spontaneously as several discrete stages: Body twisting, short quick steps, and finally longer, coordinated stepping. This approach is demonstrated on a quadrupedal robot. The result is that the system can learn to walk several minutes after inception. | [
"walking machines",
"central pattern generators",
"neural networks",
"legged locomotion",
"non-linear oscillators",
"biologically inspired robots"
] | [
"M",
"U",
"U",
"U",
"M",
"M"
] |
1mqpRRo | the beam radiance estimate for volumetric photon mapping | We present a new method for efficiently simulating the scattering of light within participating media. Using a theoretical reformulation of volumetric photon mapping, we develop a novel photon gathering technique for participating media. Traditional volumetric photon mapping samples the in-scattered radiance at numerous points along the length of a single ray by performing costly range queries within the photon map. Our technique replaces these multiple point-queries with a single beam-query, which explicitly gathers all photons along the length of an entire ray. These photons are used to estimate the accumulated in-scattered radiance arriving from a particular direction and need to be gathered only once per ray. Our method handles both fixed and adaptive kernels, is faster than regular volumetric photon mapping, and produces images with less noise. | [
"photon map",
"participating media",
"ray marching",
"nearest neighbor",
"variable kernel method",
"light transport",
"rendering",
"photon tracing",
"global illumination"
] | [
"P",
"P",
"M",
"U",
"M",
"M",
"U",
"M",
"U"
] |
-gNTvNy | Extended natural deduction images of conversions from the system of sequents | A modification of the natural deduction system for intuitionistic predicate logic, called the extension of the natural deduction system, will be defined. The main result of this modification is that each conversion from the set of conversions of the cut-elimination procedure in the system of sequents has a corresponding conversion in the set of conversions of the normalization procedure in that extension of natural deduction. | [
"cut-elimination theorem",
"normalization theorem"
] | [
"M",
"M"
] |
-G2DxwP | The Catecholamine-Cytokine Balance | Cytokines are involved both in various immune reactions and in controlling certain events in the central nervous system (CNS). In our earlier studies, it was shown that monoamine neurotransmitters, released in stress situations, exert a tonic sympathetic control on cytokine production and on the balance of proinflammatory/anti-inflammatory cytokines. Basic and clinical studies have provided evidence that the biophase level of monoamines, determined by the balance of their release and uptake, is involved in the pathophysiology and treatment of depression, while inflammatory mediators might also have a role in its etiology. In this work, we studied the role of changes in norepinephrine (NE) level on the lipopolysaccharide (LPS) evoked tumor necrosis factor (TNF)-α and interleukin (IL)-10 response both in the plasma and in the hippocampus of mice. We demonstrated that the LPS-induced TNF-α response is in direct correlation with the biophase level of NE, as it is significantly higher when the release of NE of vesicular origin is completely inhibited in an animal model of depression (reserpine treatment) and significantly lower when the biophase level of NE is increased by genetic (NETKO) or chemical (desipramine) disruption of NE reuptake. IL-10 changed inversely to TNF-α levels only in the desipramine-treated animals. Our results showed that depression is related both to changes in peripheral and in hippocampal inflammatory cytokine production and to monoamine neurotransmitter levels. Since several anti-inflammatory drugs also have antidepressant effects, we hypothesized that antidepressants are also able to modulate the LPS-induced inflammatory response, which might contribute to their antidepressant effect. | [
"cytokine",
"depression",
"catecholamine",
"neuroimmunomodulation",
"inflammation"
] | [
"P",
"P",
"U",
"U",
"U"
] |
1sJ97-h | The complexity of measuring complexity | Purpose - The purpose of this paper is to present a reflection that can contribute to the discussion of the possibility, or not, of measuring the complexity of any given system. Design/methodology/approach - The reflection takes place considering three aspects: the first, of an etymological character, with the purpose of specifying the semantics of complexity. The second, epistemological, refers to the forms of measuring complexity in different domains. And the third, of an ontological character, refers to what is essential in the complexity of reality, which includes the observer, who observes and who observes himself. Findings - It is proposed to reserve the word complex to refer to systems that are treated as an undecomposed and irreducible totality, where the act of measuring does not take place; while the expression complicated may be used when the act of reduction takes place by measuring the system. Research limitations/implications - The statement made is open to being revised, criticized, questioned, and confronted, with the aim of enriching and accepting it, or else rejecting and discarding it. Originality/value - There is some degree of originality with respect to other works that deal with the measurement of complexity but do not address the distinction that can be established between complexity and complication, with its epistemological implications and, in particular, with the challenge of its quantification. | [
"complexity",
"measurement",
"epistemology",
"systems theory"
] | [
"P",
"P",
"P",
"M"
] |
2Vxskk7 | simulation-based equivalence checking between systemc models at different levels of abstraction | Today, Electronic System Level (ESL) design is the established approach for System-on-Chip (SoC) companies. Abstraction and standardized communication interfaces based on SystemC Transaction Level Modeling (TLM) have become the core components for ESL design. The abstract models in ESL flows are stepwise refined down to hardware. In this context, verification is the major bottleneck: after each refinement step the resulting model is simulated again with the same testbench. The simulation results have to be compared to the previous results to check the functional equivalence of both models. For models at lower levels of abstraction, strong approaches exist to formally prove equivalence. However, this is not possible here due to the TLM abstraction. Hence, in practice, equivalence checking in ESL flows is based on simulation. Since implementing the necessary verification environment requires a huge effort, we propose an equivalence checking framework in this paper. Our framework allows to easily compare variable accesses in different SystemC models. Therefore, the two models are co-simulated using a client-server architecture. In combination with multi-threading our approach is very efficient, as shown by the experiments. In addition, the time required for debugging is reduced by the framework, since the respective source code references where the variable accesses did not match are presented to the user. | [
"equivalence checking",
"systemc",
"transaction level modeling",
"debugging"
] | [
"P",
"P",
"P",
"P"
] |
2Cao-r4 | Learning interaction protocols by mimicking: understanding and reproducing human interactive behavior | A four-phase system for learning human interactive behavior is proposed. The system is based on constrained motif discovery for basic action discovery. Controller generation is achieved through a piecewise-linear control generator. Models learned from interacting with multiple people can be combined. A real-world experiment is reported as a proof of concept. | [
"hri",
"social robotics",
"guided navigation",
"learning from demonstrations"
] | [
"U",
"U",
"U",
"M"
] |
GZMNESy | Mouse and Keyboard Cursor Warping to Accelerate and Reduce the Effort of Routine HCI Input Tasks | Gaze tracking has been suggested as an alternative to traditional computer pointing mechanisms. However, the accuracy limitations of gaze estimation algorithms and the fatigue imposed on users when overloading the visual perceptual channel with a motor control task have prevented the widespread adoption of gaze as a pointing modality. Rather than using gaze as a complete pointing mechanism, this study investigates the usage of gaze to complement traditional keyboard/mouse cursor positioning methods during standard human-computer interaction (HCI). With this approach, bringing the mouse/keyboard cursor to a target still requires a manual action, but the time and effort involved are substantially reduced in terms of mouse movement amplitude incurred or number of keystrokes pressed. This is accomplished by the cursor warping from its original position on the screen to the user's estimated point of regard on the screen, as estimated by video-oculography gaze tracking, when a keystroke or mouse movement event is detected. The user adjusts the final fine-grained positioning of the cursor manually. A user study was carried out on the effects of cursor warping in common computer input operations that involve cursor repositioning, using one or several monitors, as well as on its learning dynamics over time. The results show that cursor warping can speed up and/or reduce the physical effort required to complete tasks such as mouse/trackpad target acquisition, keyboard text cursor positioning, mouse/keyboard-based text selection, and drag and drop operations. | [
"cursor warping",
"gaze tracking",
"attentive user interface",
"eye tracking",
"gaze aware interfaces",
"human computer interaction (hci)",
"human factors",
"repetitive stress injury (rsi)"
] | [
"P",
"P",
"M",
"M",
"M",
"M",
"U",
"M"
] |
1tJjyc1 | construction of image retrieval systems focused on user knowledge interaction | Our objective was to apply different kinds of databases with our proposed graphical search interface, and to verify its effectiveness with respect to the user's knowledge structure during search, since the interface allows users to easily turn received information into suitable knowledge. This interface, named Concentric Ring View, is designed for multi-faceted metadata. The design concept of this interface is "result-oriented", which means users continue to search by evaluating the retrieved results. We constructed four image retrieval systems, with images designed for web pages and images of flowers, insects, and countries. We selected the databases from two aspects: the gap between images and information features, and the necessity of general knowledge to understand the values of information features. To help users better understand the relationship between retrieved results and attribute values, we added independent functions depending on the features of each database, e.g., preparing images focused on search keys and mapping to a meaningful area such as a world map. We confirmed that this interface bridged the gaps by materializing user knowledge from abstract images, and that users learned with our system and modified their knowledge structure without general knowledge. This interface can be used not only as a retrieval system but also as an educational system. | [
"image retrieval systems",
"user knowledge",
"interaction",
"graphical search interface",
"result-oriented"
] | [
"P",
"P",
"P",
"P",
"U"
] |
cWKLLHZ | LeMO: A virtual exhibition of 20th century German history | The aim of the LeMO project is to create a multimedia information system on 20th century German history in the Internet. This work is carried out in a joint project by the Fraunhofer Institute for Software and Systems Engineering (ISST), the German Historical Museum in Berlin and the Haus der Geschichte of the Federal Republic of Germany in Bonn. The LeMO system provides various options for accessing its information. With the need in mind to make cultural content attractive to young people, 3D environments have been developed for each period of 20th century history. These presentations constitute a different way of looking at history. Visitors navigate through 3D spaces to the various museum exhibits and can request further multimedia information on historical events (text, images, audio and video material). Access to specific content is also provided via a metadata-based search engine. The architecture of the LeMO system is based on Internet technologies (including VRML, HTML, streaming audio/video). This paper describes the concepts and implementations used within LeMO to structure and present information. By the end of 1998, 31 3D environments and over 4000 multimedia web pages covering various periods, topics, chronicles and biographies from German history had been developed for the virtual exhibition (www.dhm.de/lemo). From 1997 to 1998, LeMO was a project of DFN-Verein (the Association for the Promotion of a German Research Network) with financial support from Deutsche Telekom Berkom. In the LeMO+ follow-up project funded by DFN-Verein, the LeMO system is to be given additional functionality and tested for use in the classroom. | [
"multimedia",
"information system",
"internet",
"culture",
"vrml"
] | [
"P",
"P",
"P",
"P",
"P"
] |
3YxAvfm | Magnitude representation in sequential comparison of two-digit numbers is not holistic either | There is accumulating evidence suggesting that two-digit number magnitude is represented in a decomposed fashion into tens and units rather than holistically as one integrated entity. However, recently, it has been claimed that this property does not hold for the case when two to-be-compared numbers are presented sequentially. In the present study, we pursued this issue in two experiments by evaluating perceptual as well as strategic aspects arising for sequential stimulus presentation in a magnitude comparison task. We observed reliable unit-decade compatibility effects indicating decomposed processing of tens and units in a magnitude comparison task with sequential presentation of the to-be-compared numbers. In particular, we found that both confounding low-level perceptual features and stimulus set characteristics determining cue validity of the units influenced the compatibility effect. Taken together, our results clearly indicate that decomposed representations of tens and units seem to be a general characteristic of multi-digit number magnitude processing, rather than an exception occurring under very specific conditions only. Implications of these results for the understanding of number magnitude representations are discussed. | [
"holistic",
"number magnitude",
"decomposed",
"compatibility effect"
] | [
"P",
"P",
"P",
"P"
] |
-nwFmci | An object-oriented architecture for intelligent virtual receptionist of web sites | Because of the rapid proliferation of Web sites, the question of how a Web site can attract repeat visits has become an important problem. One approach is to treat each visitor to the site as a unique individual, much as a human receptionist would treat each client-customer. This paper proposes an object-oriented architecture for developing virtual receptionists for this purpose. All necessary object classes, their interactions, and the database scheme are defined, and an example is provided. This object-oriented design provides a foundation that allows the virtual agent to be easily modified. | [
"virtual receptionist",
"customized marketing",
"intelligent agents",
"object-oriented approach",
"user profile"
] | [
"P",
"U",
"R",
"R",
"U"
] |
6t11o&K | An integrated framework combining Bio-Hashed minutiae template and PKCS15 compliant card for a better secure management of fingerprint cancelable templates | We address in this paper the problem of privacy in fingerprint biometric systems. Today, cancelable techniques have been proposed to deal with this issue. Ideally, such transforms are one-way. However, even if they are with provable security, they remain vulnerable when the user-specific key that achieves cancelability property is stolen. The prominence of the cancelable template confidentiality to maintain the irreversibility property was also demonstrated for many proposed constructions. To prevent possible coming attacks, it becomes important to securely manage this key as well as the transformed template in order to avoid them being leaked simultaneously and thus compromised. To better manage the user credentials of cancelable constructs, we propose a new solution combining a trusted architecture and a cancelable fingerprint template. Therefore, a Bio-Hashed minutiae template based on a chip matching algorithm is proposed. A pkcs15 compliant cancelable biometric system for fingerprint privacy preserving is implemented on a smartcard. This closed system satisfies the safe management of the sensitive templates. The proposed solution is proved to be well resistant to different attacks. | [
"cancelable template",
"smartcard",
"fingerprint authentication",
"identity management",
"secure biometrics"
] | [
"P",
"P",
"M",
"M",
"R"
] |
7T77cZ2 | LIPS: Link Prediction as a Service for data aggregation applications | A central component of the design of wireless sensor networks is reliable and efficient transmission of data from source to destination. However, this remains a challenging problem because of the dynamic nature of wireless links, affected by interference, diffusion, and path fading. When the link quality gets worse, packets will get lost even with retransmissions and acknowledgments, once internal queues become full. For example, in a well-known study to monitor volcano behavior (Werner-Allen et al., 2006), the measured data yield of nodes ranges from 20% to 80%. To address this challenge brought by unreliable links, in this paper, we propose the idea of LIPS, or Link Prediction as a Service. Specifically, we argue that it is beneficial for applications to be aware of link layer variations, so that they can take into account future link quality estimates based on past measurements. In particular, we present a novel state space based approach for link quality prediction, and demonstrate that it is possible to integrate this model into operating system interfaces, so that higher layer data aggregation protocols can directly exploit these interfaces to improve their performance. Our intensive evaluation indicates the state space based approach is accurate, stable and lightweight compared to other strategies such as the autoregressive model (Liu and Cerpa, 2010). We carry out experiments based on commonly used sensor node hardware, including both link layer and operating system measurements. | [
"link quality prediction",
"state space model",
"prediction error minimization",
"ar model"
] | [
"P",
"R",
"M",
"M"
] |
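For the record above: the abstract does not give the paper's state-space formulation, so the sketch below substitutes a minimal one-dimensional Kalman-style filter over link-quality samples (e.g., packet reception ratio). The noise variances are illustrative assumptions, not LIPS parameters.

```python
def predict_link_quality(samples, process_var=0.01, meas_var=0.04):
    """Scalar Kalman filter over link-quality samples (e.g., PRR in [0, 1]).
    Returns the one-step-ahead estimate after consuming all samples."""
    estimate, err_var = samples[0], 1.0
    for z in samples[1:]:
        err_var += process_var                 # predict: uncertainty grows
        gain = err_var / (err_var + meas_var)  # update: weigh the new sample
        estimate += gain * (z - estimate)
        err_var *= (1.0 - gain)
    return estimate

print(predict_link_quality([0.80, 0.75, 0.60, 0.65, 0.70]))
```

Exposing such an estimate through an OS interface is what lets an aggregation protocol defer transmissions when the predicted quality is poor, which is the service the paper argues for.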
6879-aH | Winding Stairs: A sampling tool to compute sensitivity indices | Sensitivity analysis aims to ascertain how each model input factor influences the variation in the model output. In performing global sensitivity analysis, we often encounter the problem of selecting the number of runs required to estimate the first order and/or the total indices accurately at a reasonable computational cost. The Winding Stairs sampling scheme (Jansen M.J.W., Rossing W.A.H., and Daamen R.A. 1994. In: Gasman J. and van Straten G. (Eds.), Predictability and Nonlinear Modelling in Natural Sciences and Economics. pp. 334-343.) is designed to provide an economical way to compute these indices. Its main advantage is the multiple use of model evaluations, which reduces their total number by more than half. The scheme is used in three simulation studies to compare its performance with the classic Sobol' LPtau sequences. Results suggest that the Jansen Winding Stairs method provides better estimates of the Total Sensitivity Indices at small sample sizes. (A sampling sketch follows this record.) | [
"sensitivity analysis",
"winding stairs sampling scheme",
"total sensitivity indices",
"first order indices",
"sobol' lp tau sequences"
] | [
"P",
"P",
"P",
"R",
"M"
] |
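For the record above: in Winding Stairs sampling, each model run resamples exactly one of the k factors, cycling through them in turn, so consecutive outputs differ in a single input and every evaluation is reused across index estimates. A minimal sketch of the sampling loop; the index estimators themselves are omitted, and the toy model is an assumption.

```python
import numpy as np

def winding_stairs(model, k, n_cycles, rng=None):
    """Return the output series of a Winding Stairs walk: each step
    resamples exactly one of the k factors, cycling through them."""
    rng = rng or np.random.default_rng()
    x = rng.random(k)              # random start in the unit hypercube
    outputs = []
    for _ in range(n_cycles):
        for j in range(k):         # wind through the factors in turn
            x[j] = rng.random()    # resample only factor j
            outputs.append(model(x))
    return np.asarray(outputs)

ys = winding_stairs(lambda x: x[0] + 2.0 * x[1] ** 2, k=2, n_cycles=500)
# Index estimates are then built from variances of differences between
# outputs at the steps where a given factor was the one resampled.
print(ys.var())
```

Since each of the n_cycles * k evaluations contributes to estimates for several factors, the scheme needs roughly half the runs of designs that evaluate fresh points per factor, which is the economy the abstract refers to.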
aRxaH4C | Using MATLAB to advance the robotics laboratory | The RV-M1 Movemaster from Mitsubishi Electric is an excellent educational robot for students as they learn to program automated tasks and simulate manufacturing processes. Because it was publicly introduced in 1991, the RV-M1 utilizes the DOS-based OBASIC computer code as the primary interface language with the robotic drive unit. Today's students, however, often face great frustration when they work with the unfamiliar OBASIC language and the even more unfamiliar DOS operating system. To address these shortcomings, the Mechanical and Manufacturing Engineering Department at Miami University introduced MATLAB as an alternative Windows-based interface language to better reflect the current educational experience of students and to increase the functionality of the RV-M1 robotic arms. MATLAB successfully overcomes the limitations of the OBASIC/DOS environment and augments the capability of the RV-M1. Examples of the extended capability include the use of graphical user interfaces to facilitate student interaction with the robotic arm and the ability to move the robot along contoured paths. With MATLAB, instructors can develop projects that enhance the student experience with the RV-M1 and give students greater insight into robotic applications. (A contour-path sketch follows this record.) | [
"matlab",
"robotics",
"interface language",
"serial communication",
"student survey"
] | [
"P",
"P",
"P",
"U",
"M"
] |
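For the record above, a sketch in Python rather than MATLAB to match the other sketches in this section: discretizing a contoured path into waypoints and streaming them to a drive unit over a serial port. The port name and the "MOVE x,y" ASCII command are placeholders, not the RV-M1's actual drive-unit protocol.

```python
import numpy as np
import serial  # pyserial

def arc_waypoints(radius, n=20):
    """Discretize a quarter-circle contour into waypoints; point-to-point
    arms follow curved paths as a chain of short straight moves."""
    t = np.linspace(0.0, np.pi / 2.0, n)
    return np.column_stack((radius * np.cos(t), radius * np.sin(t)))

# "COM1" and the command format below are illustrative assumptions.
with serial.Serial("COM1", 9600, timeout=1) as port:
    for x, y in arc_waypoints(100.0):
        port.write(f"MOVE {x:.1f},{y:.1f}\r\n".encode("ascii"))
```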
4qT8fAN | the animal algorithm animation tool | In this paper, we present Animal, a new tool for developing animations to be used in lectures. Animal offers a small but powerful set of graphical operators. Animations are generated using a visual editor, by scripting, or via API calls. All animations can be edited visually. Animal supports source and pseudo code inclusion and highlighting, as well as precise user-defined delays between actions. The paper evaluates the functionality of Animal in comparison to other animation tools. | [
"animation",
"tool",
"tool",
"paper",
"graphics",
"operability",
"visualization",
"script",
"code",
"inclusion",
"precise",
"delay",
"action",
"functional",
"comparisons",
"algorithm-animation",
"user",
"tools"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U",
"U",
"P"
] |
4Cv5z:z | A hybrid heuristic for the traveling salesman problem | The combination of genetic and local search heuristics has been shown to be an effective approach to solving the traveling salesman problem (TSP). This paper describes a new hybrid algorithm that exploits a compact genetic algorithm to generate high-quality tours, which are then refined by means of the Lin-Kernighan (LK) local search. Local optima found by the LK local search are in turn exploited by the evolutionary part of the algorithm to improve the quality of its simulated population. The results of several experiments conducted on different TSP instances with up to 13509 cities show the efficacy of the symbiosis between the two heuristics. (A hybrid-loop sketch follows this record.) | [
"tsp",
"compact genetic algorithms",
"hybrid ga",
"lin-kernighan algorithm"
] | [
"P",
"P",
"M",
"R"
] |
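For the record above: the overall generate-then-refine loop, with random restarts and 2-opt standing in for the paper's compact GA and Lin-Kernighan components so the sketch stays self-contained. Restart counts and the toy instance are assumptions.

```python
import random

def tour_length(tour, dist):
    # Cyclic tour: dist[tour[-1]][tour[0]] is included via i = 0.
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def two_opt(tour, dist):
    """2-opt local search, a simpler stand-in for the LK step."""
    n, improved = len(tour), True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]      # edge a-b to remove
                c, d = tour[j - 1], tour[j % n]  # edge c-d to remove
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j] = tour[i:j][::-1]  # reconnect as a-c ... b-d
                    improved = True
    return tour

def hybrid_tsp(dist, restarts=25):
    """Sample a tour, refine it locally, keep the best; the paper seeds
    this loop with a compact GA instead of random restarts."""
    n, best = len(dist), None
    for _ in range(restarts):
        tour = two_opt(random.sample(range(n), n), dist)
        if best is None or tour_length(tour, dist) < tour_length(best, dist):
            best = tour
    return best

# Four cities on a unit square; the optimal tour is the ring of length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[abs(p[0] - q[0]) + abs(p[1] - q[1]) for q in pts] for p in pts]
best = hybrid_tsp(dist)
print(best, tour_length(best, dist))
```

The feedback the abstract describes runs the other way too: in the paper, local optima found by LK are fed back to update the compact GA's probability vector, which random restarts cannot capture.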
-Z1cLWv | A decade of research and development on program animation: The Jeliot experience | Jeliot is a program animation system for teaching and learning elementary programming that has been developed over the past decade, building on the Eliot animation system developed several years before. Extensive pedagogical research has been done on various aspects of the use of Jeliot, including improvements in learning, effects on attention, and acceptance by teachers. This paper surveys this research and development, and summarizes the experience and the lessons learned. | [
"program animation",
"jeliot",
"attention",
"program visualization",
"software visualization",
"eye tracking",
"conflictive animation",
"phenomenography"
] | [
"P",
"P",
"P",
"M",
"U",
"U",
"M",
"U"
] |
2rUtj:f | Efficient waste management in construction logistics: a refurbishment case study | Large-scale construction projects, with their complex logistical processes of transporting, handling, and storing material to site, on site, and from site, bear significant environmental impacts. Such impacts include use of land, production of waste, and emissions. In this paper, we investigate, using a case study approach, how well-planned and implemented material management can affect efficiency in construction logistics, focusing on the logistics of disposal. The motivation behind this research is to examine the ecological and economic impact of construction logistics on waste management on site when construction logistics is planned and determined in the early planning phase of a refurbishment project. We find that the implementation of a waste management plan can reduce environmental impacts, specifically increasing the efficiency of the logistics of disposal by approximately 9%, but that it is associated with higher costs. The findings gained from this single case study lead to case-specific recommendations for practitioners and regulators in the construction logistics area. | [
"efficiency",
"waste management",
"construction logistics",
"case study research"
] | [
"P",
"P",
"P",
"P"
] |
-CDTXQx | Model-based stereo-tracking of non-polyhedral objects for automatic disassembly experiments | Automatic disassembly tasks in the engine compartment of a used car constitute a challenge for the control of a disassembly robot by machine vision. Experience from exploratory experiments under such conditions forced us to abandon data-driven aggregation of edge elements into straight-line data segments in favor of a direct association of individual edge elements with model segments obtained from scene domain models of tools and workpieces. In addition, we had to switch from a conventional single-camera hand-eye configuration to a movable stereo configuration mounted on a separate 'observer' robot. A generalisation of our model-based tracking includes the parameters that characterize the relative pose of one camera with respect to the other in the stereo-camera set-up among the parameters to be re-estimated for each new stereo image pair. This results in a continuous re-calibration during relative movement between the stereo-camera set-up and the tracked objects. Our approach had to be extended further in order to cope with non-polyhedral objects. The methodological improvements of machine vision in the course of this research are treated in detail. We discuss, moreover, the systematic trading off of computational resources for increased robustness, which is vital for the visual control of automatic disassembly robots. (A filter sketch follows this record.) | [
"non-polyhedral objects",
"disassembly",
"machine vision",
"model-based tracking",
"visual servoing",
"image sequence evaluation",
"stereo-vision",
"kalman-filtering"
] | [
"P",
"P",
"P",
"P",
"M",
"M",
"U",
"U"
] |
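For the record above: the key idea is to augment the tracked state with the stereo rig's relative-pose parameters so both are re-estimated from every image pair. Below, a generic linear Kalman predict/update step; the paper's actual (nonlinear) state and measurement models are not given in the abstract, so all dimensions and matrices here are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x                       # predict state
    P = F @ P @ F.T + Q             # predict covariance
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # correct with measurement residual
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy dimensions: 6 object-pose parameters stacked with 6 relative
# camera-pose parameters, observed through a linear stand-in for the
# edge-element measurement model.
n, m = 12, 4
x, P = np.zeros(n), np.eye(n)
F, Q = np.eye(n), 0.01 * np.eye(n)
H = np.random.default_rng(0).standard_normal((m, n))
R = 0.1 * np.eye(m)
x, P = kalman_step(x, P, np.ones(m), F, H, Q, R)
print(x[:3])
```

Updating the camera-pose block of x at every pair is what makes the re-calibration continuous: the rig's extrinsics never need a separate offline calibration run while the observer robot moves.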
VFPQhEY | Using non-slicing topological representations for analog placement | Layout design for analog circuits has historically been a time-consuming, error-prone, manual task. Its complexity results not so much from the number of devices as from the complex interactions among devices or with the operating environment, and also from continuous-valued performance specifications. This paper addresses the problem of device-level placement for analog layout in a non-traditional way. Unlike the classic approaches, which explore a huge search space with a combinatorial optimization technique in which cells are represented by absolute coordinates and are allowed to overlap illegally as they move in the chip plane, this paper advocates the use of non-slicing topological representations, such as (symmetric-feasible) sequence-pairs, ordered trees, and binary trees. Extensive tests on industrial analog designs have shown that, by skillfully using the symmetry constraints (typical of analog circuits) to remodel the solution space of the encoding systems, topological representation techniques can achieve better computation speed than the traditional approaches while obtaining a similarly high quality of design. (A placement sketch follows this record.) | [
"topological representations",
"analog placement",
"sequence-pairs",
"ordered trees",
"binary trees"
] | [
"P",
"P",
"U",
"M",
"M"
] |
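For the record above: a sequence pair encodes relative cell positions; if cell a precedes b in both sequences it lies to the left of b, and if it precedes b only in the first sequence it lies above b. Coordinates then follow by longest-path evaluation, sketched below in a simple O(n^2) form. The symmetric-feasible handling the paper adds for analog symmetry groups is omitted, and the example cells are assumptions.

```python
def sequence_pair_placement(gamma_plus, gamma_minus, w, h):
    """Cell coordinates from a sequence pair by longest-path evaluation.
    a before b in both sequences -> a left of b; a before b only in
    gamma_plus -> a above b."""
    pp = {c: i for i, c in enumerate(gamma_plus)}
    pm = {c: i for i, c in enumerate(gamma_minus)}
    x, y = {}, {}
    for b in gamma_plus:    # gamma_plus order is topological for "left of"
        x[b] = max((x[a] + w[a] for a in x
                    if pp[a] < pp[b] and pm[a] < pm[b]), default=0.0)
    for a in gamma_minus:   # gamma_minus order is topological for "above"
        y[a] = max((y[b] + h[b] for b in y
                    if pp[a] < pp[b] and pm[a] > pm[b]), default=0.0)
    return x, y

# Three cells: C sits above A, and both sit to the left of B.
w = {"A": 2.0, "B": 3.0, "C": 2.0}
h = {"A": 1.0, "B": 2.0, "C": 1.0}
print(sequence_pair_placement(["C", "A", "B"], ["A", "C", "B"], w, h))
# -> ({'C': 0.0, 'A': 0.0, 'B': 2.0}, {'A': 0.0, 'C': 1.0, 'B': 0.0})
```

Because every sequence pair decodes to an overlap-free placement, the optimizer can perturb the two sequences freely instead of policing illegal overlaps of absolute coordinates, which is the speed advantage the abstract reports.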