id | title | abstract | keyphrases | prmu |
---|---|---|---|---|
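Each row pairs a paper's id, title, and abstract with its gold keyphrases and a parallel prmu list: one label per keyphrase describing how that phrase relates to the title-plus-abstract text. The labels appear to follow the Present/Reordered/Mixed/Unseen scheme: P if the phrase occurs verbatim, R if all of its words occur but not as one contiguous span, M if only some of its words occur, and U if none do. The sketch below is a minimal, hypothetical reimplementation of that labeling in Python; it assumes plain lowercase word tokenization, whereas the stored labels were likely computed on stemmed tokens, so borderline cases may disagree.

```python
import re

def tokens(s: str) -> list:
    # Lowercase word tokens; a simplification of whatever
    # tokenization/stemming produced the stored labels (assumption).
    return re.findall(r"[a-z0-9]+", s.lower())

def prmu_label(keyphrase: str, text: str) -> str:
    """Label a keyphrase against a document's title + abstract:
    P = present verbatim, R = all words present but reordered/split,
    M = only some words present, U = no words present."""
    kp, doc = tokens(keyphrase), tokens(text)
    if not kp:
        return "U"
    n = len(kp)
    # P: the phrase appears as one contiguous run of tokens.
    if any(doc[i:i + n] == kp for i in range(len(doc) - n + 1)):
        return "P"
    hits = sum(1 for t in set(kp) if t in set(doc))
    if hits == len(set(kp)):
        return "R"
    return "M" if hits > 0 else "U"
```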
-j9:p21 | Computing with random quantum dot repulsion | Quantum dot cellular automata (QCA) show great promise for fast computation with higher integration density and lower power consumption. Unfortunately, previous research has shown that QCA are likely to be extremely sensitive to placement error. During an investigation into placement sensitivity, it was discovered that completely random quantum dot structures have the ability to compute simple binary functions. In this paper, we further explore the random structures in an idealized way, looking for higher-order functions; an example of a one-bit full adder is shown in the paper. Moreover, a new structure, the semi-random structure, is introduced to alleviate some, but not all, difficulties in connecting disparate random structures; the difficulties arise from the fact that inputs and outputs to and from a purely random structure may not reside at the edges of the structure. In the semi-random structure, the inputs and outputs are localized to the edges. It is demonstrated that semi-random structures, like random structures, can almost assuredly compute simple Boolean functions. | [
"quantum dot cellular automata",
"random structure",
"semi-random structure",
"simulated annealing"
] | [
"P",
"P",
"P",
"U"
] |
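As a quick sanity check of the labeling sketch above against this first row (a hypothetical usage, with the abstract abbreviated): "quantum dot cellular automata" occurs verbatim in the text and labels P, while "simulated annealing" never appears and labels U, matching the stored prmu values for the row.

```python
text = ("Computing with random quantum dot repulsion. "
        "Quantum dot cellular automata (QCA) show great promise ...")
for kp in ["quantum dot cellular automata", "simulated annealing"]:
    print(kp, "->", prmu_label(kp, text))
# quantum dot cellular automata -> P
# simulated annealing -> U
```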
4J:EhxK | Polarization controlled tunable multiwavelength SOA-fiber laser based on few-mode polarization maintaining fiber loop mirror | A tunable multiwavelength fiber ring laser based on a semiconductor optical amplifier and a few-mode polarization maintaining fiber (FM-PMF) loop mirror is proposed and experimentally demonstrated. It is accomplished by simply adjusting a polarization controller (PC) next to the FM-PMF in the loop arm for polarization-dependent mode excitation in the FM-PMF and the resulting effective refractive index change. In addition, the undesirable dependence of the transmission loss of the FM-PMF loop mirror on the rotation angle of the PC in the loop arm can be reduced by using a fiber coupler with an unequal coupling ratio instead of an equal coupling ratio. Stable and tunable multiwavelength operation with up to 11 lasing lines at about 1.5 nm intervals is achieved, with a power deviation of less than 2 dB and an optical signal-to-noise extinction ratio over 35 dB. | [
"polarization maintaining fiber",
"fiber loop mirror",
"fiber lasers",
"few-mode fiber"
] | [
"P",
"P",
"R",
"R"
] |
1FJoUvp | Lattice networks: Capacity limits, optimal routing, and queueing behavior | Lattice networks are widely used in regular settings like grid computing, distributed control, satellite constellations, and sensor networks. Thus, limits on capacity, optimal routing policies, and performance with finite buffers are key issues and are addressed in this paper. In particular, we study the routing algorithms that achieve the maximum rate per node for infinite and finite buffers in the nodes and different communication models, namely uniform communications, central data gathering and border data gathering. In the case of nodes with infinite buffers, we determine the capacity of the network and we characterize the set of optimal routing algorithms that achieve capacity. In the case of nodes with finite buffers, we approximate the queueing network problem and obtain the distribution of the queue size at the nodes. This distribution allows us to study the effect of routing on the queue distribution and derive the algorithms that achieve the maximum rate. | [
"lattice networks",
"routing",
"uniform communication",
"data gathering",
"border data gathering",
"network capacity",
"queueing theory",
"square grid",
"torus"
] | [
"P",
"P",
"P",
"P",
"P",
"R",
"M",
"M",
"U"
] |
37c-KGh | TaCN growth with PDMAT and H2/Ar plasma by plasma enhanced atomic layer deposition | TaCN films were deposited by plasma-enhanced atomic layer deposition (ALD) using PDMAT and H2/Ar plasma. Calculations based on density functional theory (DFT) indicate a high energy barrier and a low reaction energy for reducing the +5 Ta oxidation state in the PDMAT precursor by using pure H radicals. With the assistance of Ar radicals, low-resistivity TaCN films of 230 μΩ·cm could be deposited by using H2/Ar plasma. By employing in situ X-ray diffraction during annealing, the activation energy for Cu diffusion through the TaCN barrier was evaluated at 1.6 eV. | [
"tacn",
"atomic layer deposition",
"reaction pathway",
"diffusion activation energy"
] | [
"P",
"P",
"M",
"R"
] |
16qQuja | Common volatility and correlation clustering in asset returns | We present a new multivariate framework for the estimation and forecasting of the evolution of financial asset conditional correlations. Our approach assumes return innovations with time-dependent covariances. A Cholesky decomposition of the asset covariance matrix, with elements written as sines and cosines of spherical coordinates, allows for modelling conditional variances and correlations and guarantees its positive definiteness at each time t. As in Christodoulakis and Satchell [Christodoulakis, G.A., Satchell, S.E., 2002. Correlated ARCH (CorrARCH): Modelling the time-varying conditional correlation between financial asset returns. European Journal of Operational Research 139 (2), 350–369], correlation is generated by conditionally autoregressive processes, thus allowing for an autocorrelation structure for correlation. Our approach allows for explicit out-of-sample forecasting and is consistent with stylized facts such as time-varying correlations and correlation clustering, co-movement between correlation coefficients, between correlation and volatility, as well as between volatility processes (co-volatility). The latter two are shown to depend on correlation and volatility persistence. Empirical evidence on a trivariate model using monthly data from the Dow Jones Industrial, Nasdaq Composite and the 3-month US Treasury Bill yield supports our theoretical arguments. | [
"common volatility",
"correlation",
"persistence",
"predictability",
"time variation"
] | [
"P",
"P",
"P",
"U",
"M"
] |
-gryHrr | a fixpoint calculus for local and global program flows | We define a new fixpoint modal logic, the visibly pushdown μ-calculus (VP-μ), as an extension of the modal μ-calculus. The models of this logic are execution trees of structured programs where the procedure calls and returns are made visible. This new logic can express pushdown specifications on the model that its classical counterpart cannot, and is motivated by recent work on visibly pushdown languages [4]. We show that our logic naturally captures several interesting program specifications in program verification and dataflow analysis. This includes a variety of program specifications such as computing combinations of local and global program flows, pre/post conditions of procedures, security properties involving the context stack, and interprocedural dataflow analysis properties. The logic can capture flow-sensitive and inter-procedural analysis, and it has constructs that allow skipping procedure calls so that local flows in a procedure can also be tracked. The logic generalizes the semantics of the modal μ-calculus by considering summaries instead of nodes as first-class objects, with appropriate constructs for concatenating summaries, and naturally captures the way in which pushdown models are model-checked. The main result of the paper is that the model-checking problem for VP-μ is effectively solvable against pushdown models with no more effort than that required for weaker logics such as CTL. We also investigate the expressive power of the logic VP-μ: we show that it encompasses all properties expressed by a corresponding pushdown temporal logic on linear structures (CaRet [2]) as well as by the classical μ-calculus. This makes VP-μ the most expressive known program logic for which algorithmic software model checking is feasible. In fact, the decidability of most known program logics (μ-calculus, temporal logics LTL and CTL, CaRet, etc.) can be understood by their interpretation in the monadic second-order logic over trees. This is not true for the logic VP-μ, making it a new powerful tractable program logic. | [
"logic",
"-calculus",
"specification",
"verification",
"model-checking",
"games",
"infinite-state",
"μ",
"pushdown systems"
] | [
"P",
"P",
"P",
"P",
"P",
"U",
"U",
"U",
"M"
] |
4yASuFQ | An Efficient Distributed Weather Data Sharing System Based on Agent | This paper presents a multi-agent based framework for managing, sharing, and accessing weather data in a geographically distributed environment. There are two tiers in this framework: a national central centre and local centres. In each node, the services for querying and accessing datasets based on the agent environment are designed, which include a data resource agent, a local management system, and a metadata system. Information retrieval can be conducted either locally or in a distributed manner, by querying local weather data or exploiting global metadata respectively. With a variety of advantages, the proposed framework provides a useful platform for research on weather data sharing at the national scale. | [
"distributed",
"weather data",
"sharing",
"agent-based",
"data management"
] | [
"P",
"P",
"P",
"U",
"R"
] |
23BS8mt | Object-oriented BeNet programming for data-focused bottom-up design of autonomous agents | This paper proposes a programming method for developing autonomous agents that behave intelligently in unpredictable environments, by extending their descriptions to realize larger behavioral repertoires. In our programming method, an agent is described as a concurrent program which updates a finite set of data objects in real time at regular intervals in an object-oriented programming language, so that its description can be modular and reusable. The description of an agent specifies its properties independent of hardware platforms. The proposed method is suitable for developing agents that are difficult to develop based on a top-down approach alone. (C) 1999 Published by Elsevier Science B.V. All rights reserved. | [
"bottom-up design",
"autonomous agent",
"programming method",
"modular description",
"real-time computing"
] | [
"P",
"P",
"P",
"R",
"U"
] |
5&-Zouj | METEX - A flexible tool for air trajectory calculation | Air trajectories are often used to study airflow patterns and source–receptor relations in environmental research. We developed the METeorological data EXplorer (METEX) for trajectory calculation with an emphasis on flexibility and ease of use. | [
"air trajectory",
"meteorology",
"environmental science",
"web service",
"soap"
] | [
"P",
"P",
"M",
"U",
"U"
] |
3&EsNpA | characterizing locality, evolution, and life span of accesses in enterprise media server workloads | The main issue we address in this paper is the workload analysis of today's enterprise media servers. This analysis aims to establish a set of properties specific to enterprise media server workloads and to compare them with well-known related observations about web server workloads. We propose two new metrics to characterize the dynamics and evolution of the accesses, and the rate of change in the site access pattern, and illustrate them with the analysis of two different enterprise media server workloads collected over a significant period of time. Another goal of our workload analysis study is to develop a media server log analysis tool, called MediaMetrics, that produces a media server traffic access profile and its system resource usage in a way useful to service providers. | [
"locality",
"evolution",
"access",
"enterprise",
"media",
"media servers",
"server",
"workload",
"addressing",
"paper",
"workload analysis",
"analysis",
"observability",
"web server",
"metrication",
"dynamics",
"change",
"pattern",
"timing",
"log analysis",
"tool",
"traffic",
"profiles",
"systems",
"resource",
"temporal locality",
"static locality",
"cdns",
"sharing patterns"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"M",
"M",
"U",
"M"
] |
1cBdKA9 | the role of tags and image aesthetics in social image search | In recent years, there has been a proliferation of consumer digital photographs taken and stored in both personal and online repositories. As the amount of user-generated digital photos increases, there is a growing need for efficient ways to search for relevant images to be shared with friends and family. Text-query based search approaches rely heavily on the similarity between the input textual query and the tags added by users to the digital content. Unfortunately, text-query based search results might include a large number of relevant photos, all of them containing very similar tags, but with varying levels of image quality and aesthetic appeal. In this paper we introduce an image re-ranking algorithm that takes into account the aesthetic appeal of the images retrieved by a consumer image sharing site search engine (Google's Picasa Web Album). In order to do so, we extend a state-of-the-art image aesthetic appeal algorithm by incorporating a set of features aimed at consumer photographs. The results of a controlled user study with 37 participants reveal that image aesthetics play a varying role on the selected images depending on the query type and on the user preferences. | [
"image aesthetics",
"re-ranking of search results",
"consumer image search"
] | [
"P",
"R",
"R"
] |
rgpVWUb | tracking continuous topological changes of complex moving regions | A moving region whose location and extent change over time can undergo topological changes such as region split and hole formation. Studying this phenomenon is useful in many applications, e.g. the topology control of wireless sensor networks and emergency handling. It is challenging to detect the topological changes of a moving region since we lack the ability to capture its continuous change of shape at all times. Moreover, for a complex moving region containing multiple components, it is hard to determine which component before the change corresponds to which component after the change. In this paper, we propose a model to determine topological changes of a complex moving region through snapshots called observations. We introduce a two-phase strategy: the first phase partitions the observations into several evaluation units and uniquely maps each unit before the change to exactly one unit after the change; the second phase interprets the topological change by integrating all basic topological changes from the evaluation units. | [
"topological change",
"moving object",
"complex region"
] | [
"P",
"M",
"R"
] |
2VuyRVL | Three-Dimensional Polyhedra Can Be Described by Three Polynomial Inequalities | Bosse et al. conjectured that for every natural number \(d \ge 2\) and every d-dimensional polytope \(P\) in \(\mathbb{R}^d\), there exist \(d\) polynomials \(p_1(x), \ldots, p_d(x)\) satisfying \(P = \{x \in \mathbb{R}^d : p_1(x) \ge 0, \ldots, p_d(x) \ge 0\}\). We show that every three-dimensional polyhedron can be described by three polynomial inequalities, which confirms the conjecture for the case \(d = 3\) but also provides an analogous statement for the case of unbounded polyhedra. The proof of our result is constructive. | [
"polynomial",
"polytope",
"lojasiewicz's inequality",
"semi-algebraic set",
"theorem of brocker and scheiderer"
] | [
"P",
"P",
"M",
"U",
"M"
] |
33drYNS | Learning a multi-dimensional companding function for lossy source coding | Although the importance of lossy source coding has been growing, a general and practical methodology for its design has not been completely resolved. The well-known vector quantization (VQ) can represent any fixed-length lossy source coding, but requires too much computation resource. Companding vector quantization (CVQ) can reduce the complexity of non-structured VQ by replacing vector quantization with a set of scalar quantizations and can represent a wide class of practically useful VQs. Although an analytical derivation of optimal CVQ is difficult except for very limited cases, optimization using data samples can be performed instead. Here we propose a CVQ optimization method, which includes bit allocation by a newly derived distortion formula that generalizes Bennett's formula, and test its validity. We applied the method to transform coding and compared the performance of our CVQ with those of Karhunen–Loève transform (KLT)-based coding and non-structured VQ. As a consequence, we found that our trained CVQ outperforms not only KLT-based coding but also non-structured VQ in the case of high bit-rate coding of linear mixtures of uniform sources. We also found that trained CVQ even outperformed KLT-based coding in the low bit-rate coding of a Gaussian source. To highlight the advantages of our approach, we also discuss the degradation of non-structured VQ and the limitations of theoretical analyses which are valid for high bit-rate coding. | [
"companding function",
"lossy source coding",
"vector quantization",
"bit allocation"
] | [
"P",
"P",
"P",
"P"
] |
3-LLPUf | Minimal projective reconstruction for combinations of points and lines in three views | In this article we address the problem of projective reconstruction of structure and motion given only image data. In particular we investigate three novel minimal combinations of points and lines over three views, and give complete solutions and reconstruction methods for two of these cases: four points and three lines in three views, and two points and six lines in three views. We show that in general there are three and seven solutions, respectively, to these cases. The reconstruction methods are tested on real and simulated data. We also give tentative results for the case of nine lines in correspondence over three views, where experiments indicate that there may be up to 36 complex solutions. | [
"projective reconstruction",
"points and lines",
"conics"
] | [
"P",
"P",
"U"
] |
-&UMu&C | Anisotropic hyperelastic modeling for face-centered cubic and diamond cubic structures | A new hyperelastic model for crystal structures with the face-centered cubic or diamond cubic system is proposed. The proposed model can be simply embedded into a nonlinear finite element analysis framework and does not require information about the crystal structure. The hyperelastic constitutive relation of the model is expressed as a polynomial-based strain energy density function. Nine strain invariants of the crystal structure are directly used as polynomial bases of the model. The hyperelastic material constants, which are the coefficients of the polynomials, are determined through a numerical simulation using the least-squares method. In the simulation, the Cauchy–Born rule and interatomic potentials are utilized to calculate reference data under various deformation conditions. As the fitting result, the hyperelastic material constants for silicon, germanium, and six transition metals (Ni, Pd, Pt, Cu, Ag, and Au) are provided. Furthermore, numerical examples are performed using the proposed hyperelastic model. | [
"hyperelastic model",
"diamond cubic",
"strain invariants",
"cauchyborn rule",
"face centered cubic",
"finite element method"
] | [
"P",
"P",
"P",
"P",
"M",
"R"
] |
4RRfsQo | A B-Tree index extension to enhance response time and the life cycle of flash memory | Flash memory has critical drawbacks such as the long latency of its write operation and a short life cycle. In order to overcome these limitations, the number of write operations to flash memory devices needs to be minimized. The B-Tree index structure, which is a popular hard-disk-based index structure, requires an excessive number of write operations when updated on flash memory. To address this, it was proposed that another layer that emulates a B-Tree be placed between the flash memory and B-Tree indexes. This approach succeeded in reducing the write operation count, but it greatly increased search time and main memory usage. This paper proposes a B-Tree index extension that reduces both the write count and the search time with limited main memory usage. First, we designed a buffer that accumulates update requests per leaf node and then simultaneously processes the update requests of the leaf node carrying the largest number of requests. Second, a type of header information is written on each leaf node. Finally, we made the index automatically control each leaf node's size. Through experiments, we show that the proposed index structure achieves a significantly lower write count and a greatly decreased search time with less main memory usage than placing a layer that emulates a B-Tree. | [
"b-tree index extension",
"flash memory life cycle",
"flash memory response time",
"write count reducing",
"bftl",
"mb-tree"
] | [
"P",
"R",
"R",
"R",
"U",
"U"
] |
3sXcMCs | A predictive bandwidth management scheme and network architecture for real-time VBR traffic | Many bandwidth management schemes for ATM networks require detailed traffic descriptions and accurate traffic models. This information, however, is not always available, especially in the case of real-time VBR video traffic. A predictive bandwidth management scheme solves this problem by using on-line traffic measurement: it predicts the future traffic rate from the measurement and allocates the bandwidth accordingly. In this article, we first introduce an adaptive wavelet predictor for dynamic bandwidth allocation. Our simulation results show that, compared with the time-domain least-mean-square (LMS) predictor, our wavelet predictor improves the prediction accuracy and significantly reduces the cell loss rate when used in dynamic bandwidth allocation. We then present a prediction-based bandwidth allocation scheme and the corresponding network architecture. In particular, a Bandwidth Allocation Unit (BAU) at the network access node is suggested. (C) 1999 Elsevier Science B.V. All rights reserved. | [
"multimedia communication",
"atm traffic control device"
] | [
"U",
"M"
] |
3zK2e9s | General lower bound on the size of \((H;k)\)-stable graphs | A graph \(G\) is called \((H;k)\)-vertex stable if \(G\) contains a subgraph isomorphic to \(H\) even after removing any \(k\) of its vertices. By stab\((H;k)\) we denote the minimum size among the sizes of all \((H;k)\)-vertex stable graphs. In this paper we present a first (non-trivial) general lower bound for stab\((H;k)\) with regard to the order, connectivity and minimum degree of \(H\). This bound is nearly sharp for \(k=1\). | [
"connectivity",
"minimum degree",
"vertex-stable graph"
] | [
"P",
"P",
"M"
] |
54waqsw | Convergence theorems for two finite families of asymptotically nonexpansive mappings | The purpose of this paper is to study the strong and weak convergence of a finite step iteration process with errors for two finite families of asymptotically nonexpansive mappings in uniformly convex Banach spaces. The results presented in the paper improve and extend some results in Sitthikul and Saejung (2009) [9]. (C) 2011 Elsevier Ltd. All rights reserved. | [
"asymptotically nonexpansive mapping",
"uniformly convex banach space",
"common fixed point",
"opial's condition",
"kadec-klee property"
] | [
"P",
"P",
"U",
"U",
"U"
] |
3hsArzD | Relationship between Platelet Imidazoline Receptor-Binding Peptides and Candidate Imidazoline-1 Receptor, IRAS | A candidate human imidazoline-1 receptor, designated imidazoline receptor antisera-selected (IRAS) protein, was cloned based on immunoreactivity with antiserum against a purified imidazoline receptor binding peptide (IRBP antiserum). Human IRAS is 167 kD in size, different from the 33- to 85-kD IRBP bands previously linked to the human platelet I1 receptor. To explore the possible relationship between IRAS and these smaller proteins, seven different epitope-specific antisera against IRAS were raised in rabbits for comparison with IRBP antiserum. Focus was on antiserum 227-241, corresponding to amino acids #227 to 241 in IRAS, because this antiserum was found uniquely able to immunoprecipitate non-denatured 85-kD and 170-kD forms of IRAS from a human megakaryoblastoma cell line (MEG01), a model of platelet-producing cells. Human platelets lacked the 170-kD form of IRAS, but 33-kD and 85-kD bands were detectable and seemed to be possible fragments of full-length IRAS. The intensity of the 85-kD band detected by antiserum 227-241 was significantly correlated (r = 0.62, P = 0.04) with the intensity of the 33-kD band across 11 human platelet samples. A positive correlation between the intensities of the 33-kD and 85-kD bands is consistent with both being fragments of IRAS. | [
"platelets",
"imidazoline receptors",
"iras",
"antiserum",
"nischarin"
] | [
"P",
"P",
"P",
"P",
"U"
] |
-EVXjHC | an asic design for a high speed implementation of the hash function sha-256 (384, 512) | An implementation of the hash functions SHA-256, 384 and 512 is presented, obtaining a high clock rate through a reduction of the critical path length, both in the Expander and in the Compressor of the hash scheme. The critical path is shown to be the smallest achievable. Synthesis results show that the new scheme can reach a clock rate well exceeding 1 GHz using a 0.13 μm technology. | [
"hash function",
"secure hash standard",
"data authentication"
] | [
"P",
"M",
"U"
] |
-&oVv7i | A note on Yoshida's optimal stopping model for option pricing | We argue that the optimal stopping model which has been used by Yoshida to discuss option pricing can often lead to an overoptimistic evaluation of payoffs (put option prices). This effect is due to the method used to compare fuzzy payoffs, using a Sugeno integral. It is shown that each fuzzy payoff can be associated with an indifferent non-fuzzy payoff which is never smaller than the highest value having membership 1. Several examples are given in which this property seems to be inconvenient. We also show that this inconvenience cannot be avoided by replacing Sugeno integrals with any other integral-like functional such as a t-seminormed integral or a Choquet integral. Finally we suggest using the Campos–González ranking criterion instead. | [
"pricing",
"fuzzy set",
"optimal stopping time",
"fuzzy random variable"
] | [
"P",
"M",
"R",
"M"
] |
2P2-1xC | Centralized-decentralized optimization for refinery scheduling | This paper presents a novel decomposition strategy for solving large-scale refinery scheduling problems. Instead of formulating one huge and unsolvable MILP or MINLP for the centralized problem, we propose a general decomposition scheme that generates smaller sub-systems that can be solved to global optimality. The original problem is decomposed at intermediate storage tanks such that the inlet and outlet streams of a tank belong to different sub-systems. Following the decomposition, each decentralized problem is solved to optimality and the solution to the original problem is obtained by integrating the optimal schedules of the sub-systems. Different case studies of refinery scheduling are presented to illustrate the applicability and effectiveness of the proposed decentralized strategy. The conditions under which these two types of optimization strategies (centralized and decentralized) give the same optimal result are discussed. (C) 2009 Elsevier Ltd. All rights reserved. | [
"centralized",
"decentralized",
"optimization",
"refinery",
"scheduling"
] | [
"P",
"P",
"P",
"P",
"P"
] |
2qW&xQf | Admission control schemes for proportional differentiated services enabled internet servers using machine learning techniques | A widely existing problem in contemporary web servers is the unpredictability of response time. Owing to long response delays, enterprise revenues are substantially reduced due to aborted e-commerce transactions. Recently, researchers have been addressing different admission control schemes of differentiated service for web servers to complement the Internet differentiated services model and thereby provide QoS support to the users of the World Wide Web. However, most of these admission control mechanisms do not guarantee the QoS requirements of all admitted clients under bursty workloads. Although an Internet service model called proportional differentiated service has been applied to web servers in the literature to improve QoS guarantees, it still involves some impractical assumptions and compatibility problems with current Internet protocols. In this paper, we propose two algorithms for admission control and traffic scheduling of web servers under proportional differentiated service, wherein a time series predictor is embedded to estimate the traffic load of each client in the next measurement period. Support vector regression and particle swarm optimization techniques are used to implement the time series predictor, based on reports of successful prediction in the literature. The experimental results reveal that the proposed schemes can effectively realize proportional delay differentiation service in a multiclass web server. Meanwhile, the small computation overhead of particle swarm optimization verifies the feasibility of this machine learning technique in real-time applications such as the admission control of Internet servers as illustrated in this work. | [
"admission control",
"proportional differentiated service",
"support vector regression",
"particle swarm optimization",
"quality of service",
"fuzzy logic",
"self-similarity"
] | [
"P",
"P",
"P",
"P",
"M",
"U",
"U"
] |
2WP2ivq | Circular reference attributed grammars - their evaluation and applications | This paper presents a combination of Reference Attributed Grammars (RAGs) and Circular Attribute Grammars (CAGs). While RAGs allow the direct and easy specification of nonlocally dependent information, CAGs allow iterative fixed-point computations to be expressed directly using recursive (circular) equations. We demonstrate how the combined formalism, Circular Reference Attributed Grammars (CRAGs), can take advantage of both these strengths, making it possible to express solutions to many problems in an easy way. We exemplify with the specification and computation of the nullable, first, and follow sets used in parser construction, a problem which is highly recursive and normally programmed by hand using an iterative algorithm. We also present a general demand-driven evaluation algorithm for CRAGs and some optimizations of it. The approach has been implemented and experimental results include computations on a series of grammars including that of Java 1.2. We also revisit some of the classical examples of CAGs and show how their solutions are facilitated by CRAGs. (c) 2007 Elsevier B.V. All rights reserved. | [
"reference attributes",
"attribute grammars",
"demand-driven evaluation",
"circular attribute evaluation",
"fixed-point evaluation",
"grammar flow",
"live analysis"
] | [
"P",
"P",
"P",
"R",
"R",
"M",
"U"
] |
4:22BLM | Hybrid approach to efficient text extraction in complex color images | Texture-based methods and connected component (CC) methods have been widely used for text localization. However, these two primary methods have their own strengths and weaknesses. This paper proposes a hybrid approach combining the two methods for text localization in complex images. An automatically constructed MLP-based texture classifier can increase the recall rates for complex images with much less user intervention and no explicit feature extraction. The CC-based filtering, based on geometry and shape information, enhances the precision rates without affecting overall performance. Then, the time-consuming texture analysis for less relevant pixels is avoided by using CAMShift. Our experiments show that the proposed hybrid approach leads to not only robust but also efficient text localization. | [
"texture",
"connected component",
"text localization",
"camshift",
"content-based image indexing",
"multi-layer perceptron (mlp)",
"xy recursive cut",
"mean shift"
] | [
"P",
"P",
"P",
"P",
"M",
"M",
"U",
"U"
] |
L&nMm4B | A co-training framework for searching XML documents | In this paper, we study the use of XML tagged keywords (or simply key-tags) to search for an XML fragment in a collection of XML documents. We present techniques that are able to employ users' evaluations as feedback and then to generate an adaptive ranked list of XML fragments as the search results. First, we extend the vector space model as a basis to search XML fragments. The model examines the relevance between the imposed key-tags and identified fragments in XML documents, and determines the ranked result as an output. Second, in order to deal with the diversified nature of XML documents, we present four XML Rankers (XRs), which have different strengths in terms of similarity, granularity, and ranking features. The XRs are specially tailored to diversified XML documents. We then evaluate the XML search effectiveness and quality for each tailored XR and propose a meta-XML ranker (MXR) comprising the four XRs. The MXR is trained via a machine learning training scheme, which we term the ranking support vector machine (RSVM) in a co-training framework (RSCF). The RSCF takes as input two sets of labelled fragments and feature vectors and then generates as output adaptive rankers in a learning process. We show empirically that, with only a small set of training XML fragments, the RSCF is able to improve after a few iterations in the learning process. Finally, we demonstrate that the RSCF-based MXR is able to bring out the strengths of the underlying XRs in order to adapt to users' perspectives on the returned search results. By using a set of key-tag queries on a variety of XML documents, we show that the RSCF-based MXR returns results with good precision. | [
"co-training",
"xml search query",
"key-tag searching",
"ranking xml",
"meta xml ranker"
] | [
"P",
"R",
"R",
"R",
"M"
] |
4auA-8b | a parallel dynamic programming algorithm on a multi-core architecture | Dynamic programming is an efficient technique for solving combinatorial search and optimization problems, and many parallel dynamic programming algorithms exist. The purpose of this paper is to study a family of dynamic programming algorithms where data dependences appear between non-consecutive stages; in other words, the data dependence is non-uniform. This kind of dynamic programming is typically called nonserial polyadic dynamic programming. Owing to the non-uniform data dependence, it is harder to optimize this problem for parallelism and locality on parallel architectures. In this paper, we address the challenge of exploiting fine-grain parallelism and locality of nonserial polyadic dynamic programming on a multi-core architecture. We present a programming and execution model for multi-core architectures with a memory hierarchy. In the framework of the new model, parallelism and locality benefit from a data dependence transformation. We propose a parallel pipelined algorithm for filling the dynamic programming matrix by decomposing the computation operators. The new parallel algorithm tolerates memory access latency using multiple threads and is easily improved with a tiling technique. We formulate and analytically solve the optimization problem determining the tile size that minimizes the total execution time. Experiments on a simulator validate the proposed model and show that the fine-grain parallel algorithm achieves sub-linear speedup and potentially high scalability on multi-core architectures. | [
"dynamic programming",
"multi-core",
"data dependence",
"memory hierarchy",
"scalabilitiy"
] | [
"P",
"P",
"P",
"P",
"U"
] |
3wSGir3 | Visual simulation of heat shimmering and mirage | We provide a physically-based framework for simulating the natural phenomena related to heat interaction between objects and the surrounding air. We introduce a heat transfer model between the heat source objects and the ambient flow environment, which includes conduction, convection, and radiation. The heat distribution of the objects is represented by a novel temperature texture. We simulate the thermal flow dynamics that models the air flow interacting with the heat by a hybrid thermal lattice Boltzmann model (HTLBM). The computational approach couples a multiple-relaxation-time LBM (MRTLBM) with a finite difference discretization of a standard advection-diffusion equation for temperature. In heat shimmering and mirage, the changes in the index of refraction of the surrounding air are attributed to temperature variation. A nonlinear ray tracing method is used for rendering. Interactive performance is achieved by accelerating the computation of both the MRTLBM and the heat transfer, as well as the rendering on contemporary graphics hardware (GPU). | [
"heat shimmering",
"mirage",
"heat transfer",
"thermal flow dynamics",
"lattice boltzmann model",
"nonlinear ray tracing",
"gpu acceleration"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"R"
] |
2sbRLsJ | Gait Deviations Induced by Visual Motion Stimulation in Roll Depend on Head Orientation | Locomotion control uses proprioceptive, visual, and vestibular signals. Previously, we analyzed the visual contribution with visual motion stimulation in roll while participants kept their heads in a normal upright orientation. In this study we applied the same visual disturbance in a head-upright and a nose-down condition. Random dot patterns were constantly rotated in roll at 15°/sec on a computer-driven binocular head-mounted display, while the participants walked a distance of 6 m. The stimulation effect was more pronounced in the nose-down condition. These results are similar to the results of previous galvanic vestibular stimulation (GVS) studies, suggesting that in terms of the direction of action, visual motion stimulation in the roll plane is similar to GVS. | [
"gait deviation",
"head orientation",
"gvs",
"visual stimulation in roll"
] | [
"P",
"P",
"P",
"R"
] |
F8Gf2Bw | Abdominal Aortic Aneurysm Is a Specific Antigen-Driven T Cell Disease | To determine whether monoclonal/oligoclonal T cells are present in abdominal aortic aneurysm (AAA) lesions, we amplified β-chain T cell receptor (TCR) transcripts from these lesions by the nonpalindromic adaptor (NPA)-polymerase chain reaction (PCR)/Vβ-specific PCR followed by cloning and sequencing. Sequence analysis revealed the presence of substantial proportions of identical β-chain TCR transcripts in AAA lesions in 9 of 10 patients examined, strongly suggesting the presence of oligoclonal populations of αβ TCR+ T cells. We have also shown the presence of oligoclonal populations of γδ TCR+ T cells in AAA lesions. Sequence analysis after appropriate PCR amplification and cloning revealed the presence of substantial proportions of identical VγI and VγII TCR transcripts in 15 of 15 patients examined, and of Vδ1 and Vδ2 TCR transcripts in 12 of 12 patients. These clonal expansions were very strong. All these clonal expansions were statistically significant by the binomial distribution. In other studies, we determined that mononuclear cells infiltrating AAA lesions express early- (CD69), intermediate- (CD25, CD38), and late- (CD45RO, HLA class II) activation antigens. These findings suggest that active ongoing inflammation is present in the aortic wall of patients with AAA. These results demonstrate that oligoclonal αβ TCR+ and γδ TCR+ T cells are present in AAA lesions. These oligoclonal T cells have been clonally expanded in vivo in response to as yet unidentified antigens. Although the antigenic specificity of these T cells remains to be determined, these T cells may play a significant role in the initiation and/or the propagation of the AAA. It appears that AAA is a specific antigen-driven T cell disease. | [
"abdominal aortic aneurysm",
"oligoclonal t cells",
"intermediate-",
"t cell mononuclear infiltrates",
"alpha/beta t cell receptor",
"gamma/delta t cell receptor",
"clonal expansions of t cells",
"expression of early-",
"and late-activation antigens"
] | [
"P",
"P",
"P",
"R",
"M",
"M",
"R",
"R",
"M"
] |
2kMM:pX | Supporting browsing-specific information needs: Introducing the Citation-Sensitive In-Browser Summariser | Practitioners and researchers need to stay up-to-date with the latest advances in their fields, but the continual growth in the amount of literature available makes this task increasingly difficult. In this article, we describe the Citation-Sensitive In-Browser Summariser (CSIBS), a new research tool to help manage the literature browsing task. The design of CSIBS was based on a user requirements analysis which identified the information needs that biomedical researchers commonly encounter when browsing through academic literature. CSIBS supports researchers in their browsing tasks by presenting both a generic and a tailored preview about a citation at the point at which they encounter it. This information is aimed at helping the reader determine whether or not to invest the time in exploring the cited article further, thus alleviating information overload. Feedback from biomedical researchers indicates that CSIBS facilitates this relevance judgement task, and that the interface and previews are informative and easy to use. | [
"text summarisation",
"user-centric design",
"document browsing aids",
"contextual summaries"
] | [
"M",
"M",
"M",
"U"
] |
3yxK6yW | On the Limits of Resolution and Visual Angle in Visualization | This article describes a perceptual level-of-detail approach for visualizing data. Properties of a dataset that cannot be resolved in the current display environment need not be shown, for example, when too few pixels are used to render a data element, or when the element's subtended visual angle falls below the acuity limits of our visual system. To identify these situations, we asked: (1) What type of information can a human user perceive in a particular display environment? (2) Can we design visualizations that control what they represent relative to these limits? and (3) Is it possible to dynamically update a visualization as the display environment changes, to continue to effectively utilize our perceptual abilities? To answer these questions, we conducted controlled experiments that identified the pixel resolution and subtended visual angle needed to distinguish different values of luminance, hue, size, and orientation. This information is summarized in a perceptual display hierarchy, a formalization describing how many pixels-resolution-and how much physical area on a viewer's retina-visual angle-is required for an element's visual properties to be readily seen. We demonstrate our theoretical results by visualizing historical climatology data from the International Panel for Climate Change. | [
"resolution",
"visual angle",
"visualization",
"luminance",
"hue",
"size",
"orientation",
"experimentation",
"human factors",
"visual acuity",
"visual perception"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U",
"M",
"R",
"M"
] |
1r1Mj8N | Virtual Prototyping of an Advanced Leveling Light System Using a Virtual Reality-Based Night Drive Simulator | This paper proposes to use a virtual reality-based night driving simulator as a tool to evaluate an advanced leveling light system. The night driving simulator visualizes the complex beam patterns of automotive headlights in high detail, while the vehicle motion directly affects the lighting direction of the headlights. The system is connected to the control algorithm of an advanced leveling light system, which controls the headlight tilting angle. Within the virtual prototyping process of the lighting system, good combinations of control parameter values can be identified, based on virtual test drives, and the number of real test drives can be reduced significantly. [DOI: 10.1115/1.3428734] | [
"virtual prototyping",
"night drive simulation",
"virtual reality",
"real-time visualization",
"dynamic leveling lights",
"aiming distance",
"glare"
] | [
"P",
"P",
"M",
"M",
"M",
"U",
"U"
] |
-sL-SnP | Nonconforming finite element analysis for Poisson eigenvalue problem | The main aim of this paper is to study a nonconforming quadrilateral finite element (named the modified quasi-Wilson element) approximation to the Poisson eigenvalue problem. Firstly, by employing a special property of this element (when \(u \in H^3(\Omega)\), the consistency error is of order \(O(h^2)\), which is one order higher than its interpolation error \(O(h)\)) and the interpolation postprocessing technique, superclose and superconvergence results of order \(O(h^2)\) for the exact eigenvector \(u\) in the broken \(H^1\)-norm are deduced on generalized rectangular meshes and rectangular meshes, respectively. Secondly, it is proved that the consistency error can even reach order \(O(h^4)\) for arbitrary quadrilateral meshes when \(u \in H^5(\Omega)\). This is a new astonishing feature which has never been discovered. Subsequently, based on the above characteristic and some asymptotic expansions of the conforming bilinear finite element, an extrapolation solution of order \(O(h^4)\) for the eigenvalue is derived. Finally, some numerical results are provided to verify the theoretical analysis. | [
"eigenvalue problem",
"modified quasi-wilson element",
"superclose and superconvergence",
"extrapolation"
] | [
"P",
"P",
"P",
"P"
] |
3ZtBv5N | Integrated system interoperability testing with applications to VoIP | This work has been motivated by the need to test the interoperability of systems carrying voice calls over the IP network. Voice over IP (VoIP) systems must be integrated and interoperate with the existing public switched telephone network (PSTN) before they are widely adopted. Standards have been developed to address this problem, but unfortunately different standards bodies and commercial consortiums have defined different standards. Furthermore, the prevailing VoIP standard, H.323, is incomplete, complex, and presents implementers with "vendor latitudes." As a result, there is no guarantee that integrated VoIP systems will interoperate properly even if the implementations are all H.323-compliant. Thus interoperability testing has become indispensable. We want to test all the system interoperations by exercising all the required patterns of "interoperating behaviors." On the other hand, test execution in a real environment is expensive, and we want to minimize the number of tests while maintaining coverage. We present a general method for automatic generation of test cases, which cover all the required system interoperations and contain a minimal number of tests. We study data structures and efficient test generation algorithms, which take time proportional to the total test case size. Finally, we report experimental results on VoIP systems. | [
"integrated system",
"voip",
"interoperability testing",
"coverage",
"redundancy"
] | [
"P",
"P",
"P",
"P",
"U"
] |
27ymSFu | encapsulation of parallelism in the volcano query processing system | Volcano is a new dataflow query processing system we have developed for database systems research and education. The uniform interface between operators makes Volcano extensible by new operators. All operators are designed and coded as if they were meant for a single-process system only. When attempting to parallelize Volcano, we had to choose between two models of parallelization, called here the bracket and operator models. We describe the reasons for not choosing the bracket model, introduce the novel operator model, and provide details of Volcano's exchange operator that parallelizes all other operators. It allows intra-operator parallelism on partitioned datasets and both vertical and horizontal inter-operator parallelism. The exchange operator encapsulates all parallelism issues and therefore makes implementation of parallel database algorithms significantly easier and more robust. Included in this encapsulation is the translation between demand-driven dataflow within processes and data-driven dataflow between processes. Since the interface between Volcano operators is similar to the one used in real, commercial systems, the techniques described here can be used to parallelize other query processing engines. | [
"encapsulation",
"parallel",
"query processing",
"process",
"systems",
"dataflow",
"database system",
"database",
"research",
"education",
"interfaces",
"operability",
"extensibility",
"model",
"reasoning",
"implementation",
"algorithm",
"robust",
"translation",
"data"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U"
] |
zKBzLhF | Hybrid GMDH-type modeling for nonlinear systems: Synergism to intelligent identification | This paper presents a novel hybrid GMDH-type algorithm which combines neural networks (NNs) with an approximation scheme (self-organizing polynomial neural network: SOPNN). This composite structure is developed to establish a new heuristic approximation method for the identification of nonlinear static systems. NNs have been widely employed in process modeling and control because of their approximation capabilities. SOPNN is an analysis technique for identifying nonlinear relationships between the inputs and outputs of such systems, and builds hierarchical polynomial regressions of the required complexity. The combined model can therefore harmonize NNs with SOPNN and find a workable synergistic environment. Simulation results on a nonlinear static system show that the proposed method is much more accurate than other modeling methods. Thus, it can be considered an efficient system identification methodology. | [
"hybrid gmdh-type algorithm",
"neural networks",
"sopnn",
"heuristic approximation",
"system identification"
] | [
"P",
"P",
"P",
"P",
"P"
] |
2VKXGmd | Focusing on content reusability and interoperability in a personalized hypermedia assessment tool | This paper presents the development of a modularized hypermedia testing tool, called iAdaptTest, based entirely on e-learning specifications, and discusses how this architecture improves the reusability and the interoperability of the learning data. All categories of data, that is, topics, user profiles, testing data, adaptive rules, and testing results, are coded in XML format complying with Topic Maps, IMS LIP and IMS QTI. The data are stored in distinct files and can be independently shared across different educational applications. The paper concludes with an evaluation study concerning the creation of formative and summative assessments for adult seminars. Through focused interviews, the participants of the study identified the ability to share information and the multi-criteria adaptation options as the most important features of the system. Further, in the second phase of the evaluation, the files produced were shared with other educational applications, verifying that the learning data could be imported and rendered correctly. | [
"reusability",
"interoperability",
"assessment",
"xml",
"adaptive hypermedia",
"educational technology",
"personalisation",
"multimedia applications"
] | [
"P",
"P",
"P",
"P",
"R",
"M",
"U",
"M"
] |
4PEzpan | Understanding consumer heterogeneity: A business intelligence application of neural networks | This paper describes a business intelligence application of neural networks in analyzing consumer heterogeneity in the context of eating-out behavior in Taiwan. We apply a neural network rule extraction algorithm which automatically groups the consumers into identifiable segments according to their socio-demographic information. Within each of these segments, the consumers are distinguished between those who eat-out frequently from those who do not based on their psychological traits and eat-out considerations. The data set for this study has been collected through a survey of 800 Taiwanese consumers. Demographic information such as gender, age and income were recorded. In addition, information about their psychological traits and eating-out considerations that might influence the frequency of eating-out were obtained. The results of our data analysis show that the neural network rule extraction algorithm is able to find distinct consumer segments and predict the consumers within each segment with good accuracy. | [
"business intelligence",
"neural network",
"rule extraction",
"decision tree",
"eating-out prediction"
] | [
"P",
"P",
"P",
"U",
"R"
] |
3G:XMPN | WTrack: HMM-based walk pattern recognition and indoor pedestrian tracking using phone inertial sensors | Indoor tracking systems have become very popular, wherein pedestrian movement is analyzed in a variety of commercial and secure spaces. The inertial sensor-based method makes great contributions to continuous and seamless indoor pedestrian tracking. However, such a system is vulnerable to cumulative locating errors as the moving distance increases. Inaccurate heading values caused by the interference of the body swing of natural walking and by geomagnetic disturbances are the main sources of the accumulative errors. To reduce such errors, additional infrastructure or highly accurate sensors have been used by previous works, which considerably raises the complexity of the architecture. This paper presents an indoor pedestrian tracking system called WTrack, using only the geomagnetic and acceleration sensors that are commonly carried by smartphones. A fine-grained walk pattern of indoor pedestrians is modeled through a Hidden Markov Model. With this model, WTrack can track indoor pedestrians by continuously recognizing the pre-defined walk patterns of pedestrians. More importantly, WTrack is able to resist both the interference of the body swing of natural walking and the geomagnetic disturbances of nearby objects. Our experimental results reveal that the location error is less than 2 m, which is considered adequate for indoor location-based-service applications. The adaptive sample rate adjustment mode further reduces the energy consumption by 52% compared with the constant sampling mode. | [
"walk pattern recognition",
"indoor pedestrian tracking",
"smartphone",
"inertial sensing",
"ubiquitous computing"
] | [
"P",
"P",
"P",
"M",
"U"
] |
3TZupn1 | Larger than Life's Invariant Measures | Larger than Life (LtL) is a four-parameter family of two-dimensional cellular automata that generalizes John Horton Conway's celebrated Game of Life (Life) to large neighborhoods and general birth and survival thresholds. If T is an LtL rule and A a random configuration, then T^t(A) denotes the state of the system at time t starting from A. T^t(A) may be thought of as a Markov process, since the sites update independently of all preceding times except the current one. The Markov process is degenerate since the transitions are deterministic. Nevertheless, it has a compact state space, so there exists a measure μ that is invariant under the rule. Since the dynamics are translation invariant, μ can be chosen translation invariant as well. In this paper, we prove that there are upper bounds, sometimes sharp, on the density of such measures. We also prove that there are upper bounds on the densities of LtL's life measures, which are fixed points for given rules. Calculating these bounds requires a large-neighborhood combinatorial calculation, which is done only for certain cases. The remaining cases are left as open problems. | [
"larger than life",
"invariant measures",
"cellular automata",
"game of life"
] | [
"P",
"P",
"P",
"P"
] |
3uygNM- | A DSL for specifying run-time adaptations for embedded systems: an application to vehicle stereo navigation | The traditional approach for specifying adaptive behavior in embedded applications requires developers to engage in error-prone programming tasks. This results in long design cycles and in the inherent inability to explore and evaluate a wide variety of alternative adaptation behaviors, which is critical for systems exposed to dynamic operational and situational environments. In this paper, we introduce a domain-specific language (DSL) for specifying and implementing run-time adaptable application behavior. We illustrate our approach using a real-life stereo navigation application as a case study, highlighting the impact and benefits of dynamically adapting algorithm parameters. The experiments show our approach to be effective: run-time adaptations are easily specified at a higher level with the DSL, and thus at a lower programming effort than when using a general-purpose language such as C. | [
"run-time adaptations",
"embedded systems",
"stereo navigation",
"adaptable behavior",
"domain-specific languages"
] | [
"P",
"P",
"P",
"P",
"R"
] |
55QeSKg | On the existence, uniqueness and topological structure of solution sets to a certain fractional differential equation | The main goal of this paper is to prove two Aronszajn-type theorems for some initial value problems formulated in terms of fractional derivatives. Moreover, we establish a theorem on the existence and uniqueness of positive solutions (the standard Riemann-Liouville operators are recalled after this record). | [
"fractional differential equation",
"initial value problem",
"existence and uniqueness of solutions",
"r? r ? set r? r ? r? r ? r ?",
"riemannliouville integral"
] | [
"P",
"P",
"R",
"M",
"U"
] |
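For background, the standard Riemann-Liouville operators with which such initial value problems are usually formulated (textbook definitions, not the paper's specific problem):

```latex
% Riemann-Liouville fractional integral of order \alpha > 0:
I^{\alpha} f(t) \;=\; \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t - s)^{\alpha - 1} f(s)\, ds
% Riemann-Liouville fractional derivative of order 0 < \alpha < 1:
D^{\alpha} f(t) \;=\; \frac{d}{dt}\, I^{1-\alpha} f(t)
            \;=\; \frac{1}{\Gamma(1 - \alpha)} \frac{d}{dt} \int_{0}^{t} (t - s)^{-\alpha} f(s)\, ds
```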
3n2mkPQ | A home environment test battery for status assessment in patients with advanced Parkinson's disease | A test battery for assessing patient state in advanced Parkinson's disease, consisting of self-assessments and motor tests, was constructed and implemented on a hand computer with touch screen in a telemedicine setting. The aim of this work was to construct an assessment device applicable during motor fluctuations in the patient's home environment. Selection of self-assessment questions was based on questions from an e-diary previously used in a clinical trial. Both un-cued and cued tapping tests and spiral drawing tests were designed for capturing upper limb stiffness, slowness and involuntary movements. The patient interface gave an audible signal at scheduled response times and was locked otherwise. Data messages in an XML format were sent from the hand unit to a central server for storage, processing and presentation. In tapping tests, speed and accuracy were calculated, and in spiral tests, the standard deviation of frequency-filtered radial drawing velocity was calculated. An overall test score, combining repeated assessments of the different test items during a test period, was defined based on principal component analysis and linear regression. An evaluation with two pilot patients before and after receiving new types of treatments was performed. Compliance and usability were assessed in a clinical trial (65 patients with advanced Parkinson's disease), and correlations between different test items and internal consistency were investigated. The test battery could detect treatment effects in the two pilot patients, in self-assessments, tapping test results and spiral scores alike. It had good patient compliance and acceptable usability according to nine nurses. Correlation analysis showed that tapping results provide different information from diary responses. Internal consistency of the test battery was good and learning effects in the tapping tests were small. | [
"home environment",
"test battery",
"parkinson's disease",
"self-assessment",
"motor test",
"telemedicine",
"motor fluctuations",
"tapping",
"spiral drawing",
"electronic diary",
"movement disorder"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"M",
"M"
] |
raagnws | inferring private information using social network data | On-line social networks, such as Facebook, are increasingly utilized by many users. These networks allow people to publish details about themselves and connect to their friends. Some of the information revealed inside these networks is private and it is possible that corporations could use learning algorithms on the released data to predict undisclosed private information. In this paper, we explore how to launch inference attacks using released social networking data to predict undisclosed private information about individuals. We then explore the effectiveness of possible sanitization techniques that can be used to combat such inference attacks under different scenarios. | [
"inference",
"informal",
"use",
"social networks",
"network",
"data",
"facebook",
"users",
"publish",
"connection",
"learning",
"algorithm",
"predict",
"paper",
"exploration",
"attack",
"effect",
"scenario",
"privacy"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U"
] |
3gbdV9s | Decluster: a complex network model-based data center network topology | To cope with the increasing demands of computation and storage, data centers must keep pace with the rapid growth of data size. A data center therefore needs a scalability property whereby each expansion of the data center network is done with only a few modifications. Besides scalability, we also need a data center to have good performance, such as high throughput. For these purposes, we propose Decluster, a complex network model-based data center network topology. The complex network model of Decluster is derived from a random network; such a model naturally satisfies the requirement of scalability. Decluster employs the complex network model to achieve high throughput by reducing the variance of local clustering coefficients. We have carried out extensive simulations to demonstrate that Decluster enjoys good performance while maintaining scalability. | [
"complex network",
"data center network topology",
"local clustering coefficient"
] | [
"P",
"P",
"P"
] |
27ip1w& | Secure administration of cryptographic role-based access control for large-scale cloud storage systems | A new cryptographic administrative RBAC model, AdC-RBAC, for cloud data storage. Administrative tasks are allowed to be performed only by authorised roles. A new role-based encryption (RBE) scheme that works with the AdC-RBAC model. Enforcement of role-based access policies for secure data storage in the cloud. Protection of data security in large-scale cloud systems. | [
"administration",
"role-based access control",
"data storage",
"role-based encryption",
"cryptographic rbac",
"cloud computing"
] | [
"P",
"P",
"P",
"P",
"R",
"M"
] |
3pDiBC7 | tour into the picture using relative depth calculation | Tour into the picture (TIP), proposed by Horry et al., is an approach that easily makes an animation from one 2D image of a scene. TIP provides a simple 3D scene model, which is only the combination of some billboards and a 3D polygon frame, to generate gratifying high-quality 3D animation. As the key technique of TIP, the spidery mesh, which is based on the vanishing points in the image, provides enough information to do the reconstruction. But the normative calculation of relative depth from a single image has received little attention. In this paper, we propose an approach to relative depth calculation, including viewpoint detection, relative depth calculation (multiple vanishing points), and relative size calculation, to get a smooth switch between the TIP model and the reference image. Without complex calculation, the TIP model can be automatically constructed with respect to the vanishing point(s) specified by the user. We also show that our method of combining TIP with panorama can be applied practicably and naturally to the representation of cultural heritage. | [
"tour into the picture (tip)",
"panorama",
"walk-through",
"virtual reality (vr)",
"projective geometry",
"relative depth calculation (rdc)",
"image-based rendering (ibr)"
] | [
"P",
"P",
"U",
"M",
"U",
"M",
"M"
] |
3gb-oZJ | High Performance Overlay File Distribution by Integrating Resource Discovery and Service Scheduling | Many existing studies in overlay networking have focused on resource discovery or server selection. This work studies the integrated performance of peer-to-peer (P2P) file sharing systems using location-aware resource discovery and service scheduling schemes. A novel file capacity amplification (FCA) model is first presented to capture the file distribution problem. Then two novel service scheduling schemes and protocols, Capacity Amplification (CA) and its variant CA with Penetration (CAP), are presented to enhance the performance of file distribution in overlay networks. The CA scheme represents a greedy approach to selecting clients for service, disregarding the effect of delay latency on transport capacity. The CAP scheme, on the other hand, adopts the semantics of small world networks to reduce delay latency among peer servers and clients effectively. Consequently, the effective transport capacity can be increased efficiently, leading to fast file distribution. The analytical results indicate that traditional scheduling schemes such as FCFS perform poorly compared with the CA and CAP schemes. Furthermore, a high-performance P2P file distribution system needs both an efficient resource discovery scheme and a good service scheduling scheme. | [
"file distribution",
"peer-to-peer",
"p2p",
"capacity amplification",
"penetration",
"small world network"
] | [
"P",
"P",
"P",
"P",
"P",
"P"
] |
-HA7gy- | A meshfree method based on radial basis functions for the eigenvalues of transient Stokes equations | In this paper, a meshfree method based on radial basis functions (RBFs) is developed to approximate the eigenvalues of the Stokes equations in primitive variables in a square domain. To avoid inaccuracy near the boundaries, the collocation-on-boundary technique is applied. This approach leads to more accurate solutions in comparison with finite element methods. To investigate the role of the shape parameter in the approximation, some discussion of the shape parameter is presented (an RBF collocation sketch follows this record). | [
"rbf",
"collocation on boundary technique",
"shape parameter",
"stokes eigenvalue problem",
"collocation method"
] | [
"P",
"P",
"P",
"M",
"R"
] |
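A small illustration of RBF collocation in its simplest (1-D interpolation) form, showing the role the shape parameter plays; the multiquadric kernel and the parameter values are generic choices, not the paper's Stokes discretization:

```python
import numpy as np

def rbf_interpolate(x_nodes, f_nodes, x_eval, c=1.0):
    """Multiquadric RBF collocation/interpolation in 1-D.

    c is the shape parameter: it trades accuracy against the
    conditioning of the collocation matrix.
    """
    phi = lambda r: np.sqrt(r**2 + c**2)          # multiquadric kernel
    A = phi(x_nodes[:, None] - x_nodes[None, :])  # collocation matrix
    w = np.linalg.solve(A, f_nodes)               # interpolation weights
    return phi(x_eval[:, None] - x_nodes[None, :]) @ w

x = np.linspace(0, 1, 15)
xe = np.linspace(0, 1, 200)
for c in (0.1, 1.0):
    err = np.max(np.abs(rbf_interpolate(x, np.sin(2 * np.pi * x), xe, c)
                        - np.sin(2 * np.pi * xe)))
    print(c, err)   # the shape parameter controls the accuracy trade-off
```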
1Hs6ogN | three steps to views: extending the object-oriented paradigm | At the core of any sophisticated software development and maintenance environment is a large mass of complex data. The data (the central data of the environment) is composed of smaller sets of data that can be related in complicated and often subtle ways. The user or developer of the environment will be more effective if they are able to deal with conceptual slices, or views, of the large, complex structure. This paper presents an architectural building block for object-based software environments based on the views concept. The building block allows the construction of global abstractions that describe unified behavior of large sets of objects. The basis of the architecture relies on extending the object-oriented paradigm in three steps: (1) defining multiple interfaces in object classes; (2) controlling visibility of instance variables; and (3) allowing multiple copies of an instance variable to occur within an object instance. This paper focuses on the technical aspects of the views approach. | [
"views",
"object",
"object-oriented",
"core",
"software development",
"software",
"developer",
"maintenance",
"environments",
"complexity",
"data",
"user",
"effect",
"concept",
"slice",
"structure",
"paper",
"architecture",
"building block",
"global",
"abstraction",
"behavior",
"interfaces",
"control",
"visibility",
"variability",
"aspect"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
-wZ9AJV | Ergonomic design of beverage can lift tabs based on numerical evaluations of fingertip discomfort | This paper introduces finite element analyses to evaluate numerically and objectively the feelings in the fingertip when opening aluminum beverage cans, in order to design the shape of the tab. Experiments indenting the fingertip pulp vertically with a probe and with tabs of aluminum beverage can ends have allowed us to observe force responses and feelings in the fingertip. It was found that a typical force-displacement curve may be simplified as a combination of three curves with different gradients. Participants feel a touch at Curve 1 of the force-displacement curve, then feel a pressure and their pulse at Curve 2, and finally feel discomfort followed by a pain in the fingertip at Curve 3. Finite element analyses have been performed to simulate indenting the tab with the fingertip vertically, confirming that the simulation results agree well with the experimental observations. Finally, numerical simulations of the finger pulling up the tab of the can end have also been performed, and discomfort in the fingertip has been related to the maximum value of the contact stress of the finger model. Comparisons of three designs of tab ring shape showed that a tab with a larger contact area with the finger is better. | [
"finite element analyses",
"aluminum beverage cans",
"human fingertip indentation experiments",
"numerical evaluation of pain"
] | [
"P",
"P",
"M",
"R"
] |
2Df7Nts | Threshold jumping and wrap-around scan techniques toward efficient tag identification in high density RFID systems | With the emergence of wireless RFID technologies, the problem of anti-collision has been arousing attention and has instigated researchers to propose different heuristic algorithms for making RFID systems operate in a more efficient manner. However, there still remain challenges in enhancing system throughput and stability, because the underlying technologies face different limitations in system performance when the network density is high. In this paper, we present a Threshold Jumping (TJ) and a Wrap-Around Scan (WAS) technique, which are query tree based approaches, aiming to coordinate simultaneous communications in high density RFID environments, to speed up tag identification, to increase the overall read rate, and to improve system throughput in large-scale RFID systems (a baseline query-tree sketch follows this record). The main idea of Threshold Jumping is to limit the number of collisions. When the number of collisions exceeds a predefined threshold, it reveals that the tag density in the RF field is too high. To avoid unnecessary enquiry messages, the prefix matching is moved to the next level of the query tree, alleviating the collision problem. The method of setting a frequency bound indeed improves efficiency in high density and randomly deployed RFID systems. However, in irregular or imbalanced RFID networks, inefficient situations may arise. The problem is that the prefix matching is performed in a single-direction level-order scheme, which may produce an imbalanced query tree in which the right sub-tree is never examined if the identification process moves to the next level before scanning the right sub-tree due to threshold jumping. By scanning the query tree from right to left in alternating levels, i.e., wrap-around, this flaw can be ameliorated. To evaluate the performance of the proposed techniques, we have implemented the TJ and WAS methods along with the query tree protocol. The simulation results show that the proposed techniques provide superior performance in high density environments. It is shown that TJ and WAS are effective in terms of increasing system throughput and minimizing identification delay. | [
"threshold jumping",
"wrap-around scan",
"query tree",
"tag anti-collision"
] | [
"P",
"P",
"P",
"R"
] |
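For reference, the plain query tree identification that TJ and WAS build on, in a few lines; the tag IDs are made up, and the threshold-jumping and wrap-around refinements are only indicated in the comments:

```python
def query_tree_identify(tags):
    """Basic query tree anti-collision protocol.

    The reader queries a binary prefix; tags whose ID starts with it
    respond. A collision splits the prefix into its two children.
    TJ would jump a level once collisions exceed a threshold, and WAS
    would alternate the left/right scan order between levels.
    """
    identified, queries = [], 0
    stack = [""]
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tags if t.startswith(prefix)]
        if len(responders) == 1:
            identified.append(responders[0])
        elif len(responders) > 1:          # collision: extend the prefix
            stack.extend([prefix + "0", prefix + "1"])
    return identified, queries

ids = ["0010", "0101", "0110", "1100"]     # illustrative 4-bit tag IDs
print(query_tree_identify(ids))            # all four IDs plus query count
```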
-N61MBL | Evaluation of the effect of SSL overhead in the performance of e-business servers operating in B2B scenarios | In current business-to-business environments, transactions between e-business servers must be carried out with a high security level. To carry out secure transactions, the servers must perform additional tasks, such as exchanging encryption keys and encrypting and decrypting the information interchanged during the transactions. The combination of several specific algorithms for these tasks constitutes a cipher suite. The additional tasks degrade server performance, and the challenge is to quantify the degradation as a function of the cipher suite selected. Until now, several research works have evaluated the impact of security on the performance provided by web servers using static and very simple dynamic contents. However, there is a lack of research into the impact of security on the performance of e-business servers which execute complex transactions, some of them involving additional transactions with other servers. This work presents an evaluation of the impact of using SSL, with several representative configurations, on the performance of e-business servers. The business application used to carry out this evaluation is the TPC-App benchmark, which is a good representation of business-to-business environments. The benchmark runs on a cluster of two layers. The results of this evaluation are unexpected, because the impact of SSL on performance is small compared to the results of previous works that evaluate web servers, for which the impact of SSL on performance is very high. Therefore, this work provides insight to solve the tradeoff between security and performance when an SSL cipher suite must be selected for a complex e-business system rather than for a simple web server. (C) 2007 Elsevier B.V. All rights reserved. | [
"ssl overhead",
"tpc-app benchmark",
"e-business server performance",
"b2b environments",
"impact of security on performance"
] | [
"P",
"P",
"R",
"R",
"R"
] |
419CKaf | Parametric Lagrangian dual for the binary quadratic programming problem | Based on a difference-of-convex decomposition of the Lagrangian function, we propose and study a family of parametric Lagrangian duals for the binary quadratic program. We then show that they improve several lower bounds from the recent literature (the generic dual construction is sketched after this record). | [
"lagrangian dual",
"binary quadratic program",
"lower bound",
"semidefinite programming",
"d.c. decomposition",
"90c09",
"90c10",
"90c20"
] | [
"P",
"P",
"P",
"M",
"M",
"U",
"U",
"U"
] |
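The generic construction behind such parametric bounds, sketched for the usual formulation; this reproduces the standard dualization of the binary constraints, not necessarily the paper's exact parametrization:

```latex
% Binary quadratic program (generic form):
%   \min_{x \in \{0,1\}^n} \; x^{\top} A x + c^{\top} x
% Dualizing the binary constraints x_i^2 = x_i with multipliers u_i:
d(u) \;=\; \min_{x \in \mathbb{R}^n} \; x^{\top}\bigl(A + \operatorname{Diag}(u)\bigr)x + (c - u)^{\top} x
% d(u) is a valid lower bound whenever A + \operatorname{Diag}(u) \succeq 0,
% since \sum_i u_i (x_i^2 - x_i) vanishes at every binary point; sweeping u
% over this set gives a parametric family of bounds.
```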
4jFv&s& | An exact method for the two-echelon, single-source, capacitated facility location problem | Facility location problems form an important class of integer programming problems, with applications in the telecommunication, distribution and transportation industries. In this paper we are concerned with a particular type of facility location problem in which there exist two echelons of facilities. Each facility in the second echelon has limited capacity and can be supplied by only one facility in the first echelon. Each customer is serviced by only one facility in the second echelon. The number and location of facilities in both echelons, together with the allocation of customers to the second-echelon facilities, are to be determined simultaneously. We propose a Lagrangian relaxation-based branch and bound algorithm for its solution. We present numerical results for a large suite of test problems of realistic and practical size. These indicate that the method is efficient. It provides smaller branch and bound trees and requires less CPU time as compared to LP-based branch and bound obtained from a 0-1 integer programming package. | [
"facilities",
"integer programming",
"branch and bound",
"optimization"
] | [
"P",
"P",
"P",
"U"
] |
YhkzkHN | Dead-reckoning sensor system and tracking algorithm for 3-D pipeline mapping | A dead-reckoning sensor system and a tracking algorithm for 3-D pipeline mapping are proposed for a tap water pipeline whose diameter is small and whose inner surface is rough due to pipe scales. The goals of this study are to overcome the performance limitations of small and low-grade sensors by combining various sensors with complementary functions, and to achieve robustness against a severe environment. The dead-reckoning sensor system consists of a small, low-cost micro-electromechanical-system inertial measurement unit (MEMS IMU) and an optical navigation sensor (as used in laser mice). The tracking algorithm consists of a multi-rate extended Kalman filter (EKF) to fuse redundant and complementary data from the MEMS IMU and the optical navigation sensor, and a geometry compensation method to reduce position estimation error using the end point of the pipeline (an EKF skeleton is sketched after this record). Two sets of experimental data have been obtained by driving a radio-controlled car equipped with the sensor system in a 3-D pipeline and on asphalt pavement. Our study can be used to estimate the path of a 3-D pipeline or of mobile robots. (C) 2009 Elsevier Ltd. All rights reserved. | [
"3-d pipeline mapping",
"optical navigation sensor",
"extended kalman filter",
"dead reckoning",
"multi-sensor fusion"
] | [
"P",
"P",
"P",
"U",
"U"
] |
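A generic predict/update skeleton of the extended Kalman filter at the core of such a fusion scheme; the process and measurement models f, h and their Jacobians F, H are left abstract, since the paper's IMU/optical-sensor specifics are not reproduced here:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P : state estimate and covariance
    u, z : control input (e.g. IMU reading) and measurement
           (e.g. optical-sensor velocity)
    f, h : process and measurement models; F, H their Jacobians
    Q, R : process and measurement noise covariances
    """
    # Predict with the (nonlinear) process model, e.g. IMU integration.
    x_pred = f(x, u)
    Fx = F(x, u)
    P_pred = Fx @ P @ Fx.T + Q
    # Update with the measurement model; in a multi-rate filter this
    # block runs only when the corresponding sensor has new data.
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R
    K = P_pred @ Hx.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```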
1x-oieA | Fast algorithms for high-order sparse linear prediction with applications to speech processing | Propose fast algorithms for sparse linear prediction. Usage of O(N log N) algorithms for the repeated solution of symmetric Toeplitz systems. Can handle even quite large dimensions and high sampling rates. The fast algorithms show possibilities for implementation in real-time systems. Experiments show that high- and low-accuracy solutions perform almost equally well. (A Toeplitz-solve sketch follows this record.) | [
"sparse linear prediction",
"speech and audio processing",
"linear programming",
"real-time optimization",
"speech reconstruction",
"packet loss concealment"
] | [
"P",
"M",
"M",
"M",
"M",
"U"
] |
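The structure these fast algorithms exploit is that the autocorrelation normal equations of linear prediction are symmetric Toeplitz; the sketch below shows only the repeated Toeplitz solve using SciPy's Levinson-type solver, not the paper's O(N log N) machinery or its sparsity-promoting objective:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)           # a stand-in for a speech frame
order = 32                              # prediction order

# Autocorrelation sequence r[0..order]; the normal equations R a = r[1:]
# have a symmetric Toeplitz R whose first column is r[:order].
r = np.array([x[: len(x) - k] @ x[k:] for k in range(order + 1)])
a = solve_toeplitz(r[:order], r[1 : order + 1])  # Levinson-type solve

# Prediction residual energy as a quick sanity check.
pred = np.convolve(x, np.concatenate(([0.0], a)))[: len(x)]
print(float(np.mean((x - pred) ** 2)) <= float(np.mean(x ** 2)))
```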
R2ufs&6 | Reflective Pervasive Systems | Pervasive adaptive systems are concerned with the construction of "smart" technologies capable of adapting to the needs of the individual in real time. In order to achieve this level of specificity, systems must be capable of monitoring the psychological status of the user and responding to these changes in real time, and across multiple systems if necessary. This article describes a number of conceptual issues associated with this category of adaptive technology. The biocybernetic loop describes different approaches to monitoring the status of the user, from physiological sensors to overt behavior. These data are used to drive real-time system adaptation tailored to a specific user in a particular context. The rate at which the technology adapts to the individual user is described over three different phases of usage: awareness (short-term), adjustment (medium-term), and coevolution (long-term). An ontology is then proposed for the development of an adaptive software architecture that embodies this approach and may be extended to encompass several distinct loops working in parallel. The feasibility of the approach is assessed through implemented case studies of performance and functionality. | [
"biocybernetic loop",
"ontology",
"design",
"human factors",
"languages",
"physiological computing",
"middleware"
] | [
"P",
"P",
"U",
"U",
"U",
"M",
"U"
] |
4BknrSB | CFD simulation for pedestrian wind comfort and wind safety in urban areas: General decision framework and case study for the Eindhoven University campus | Wind comfort and wind safety for pedestrians are important requirements in urban areas. Many city authorities request studies of pedestrian wind comfort and wind safety for new buildings and new urban areas. These studies involve combining statistical meteorological data, aerodynamic information and criteria for wind comfort and wind safety. Detailed aerodynamic information can be obtained using Computational Fluid Dynamics (CFD), which offers considerable advantages compared to wind tunnel testing. However, the accuracy and reliability of CFD simulations can easily be compromised. For this reason, several sets of best practice guidelines have been developed in the past decades. Based on these guidelines, this paper presents a general simulation and decision framework for the evaluation of pedestrian wind comfort and wind safety in urban areas with CFD. As a case study, pedestrian wind comfort and safety at the campus of Eindhoven University of Technology are analysed. The turbulent wind flow pattern over the campus terrain is obtained by solving the 3D steady Reynolds-averaged Navier-Stokes equations with the realisable k-ε model on an extensive high-resolution grid based on grid-convergence analysis. The simulation results are compared with long-term and short-term on-site wind speed measurements. Wind comfort and wind safety are assessed and potential design improvements are evaluated. The framework and the case study are intended to support and guide future studies of wind comfort and wind safety with CFD and, this way, to contribute to improved wind environmental quality in urban areas. | [
"computational fluid dynamics (cfd)",
"guidelines",
"wind flow",
"building aerodynamics",
"discomfort and danger",
"built environment",
"experimental validation"
] | [
"P",
"P",
"P",
"R",
"M",
"U",
"U"
] |
12wRmsc | the application of error-sensitive testing strategies to debugging | Program errors can be considered from two perspectives: cause and effect. The goal of program testing is to detect errors by discovering their effects, while the goal of debugging is to search for the associated cause. In this paper, we explore ways in which some of the results of testing research can be applied to the debugging process. In particular, computation testing and domain testing, which are two error-sensitive test data selection strategies, are described. Ways in which these selection strategies can be used as debugging aids are then discussed. | [
"applications",
"errors",
"test",
"strategies",
"debugging",
"effect",
"program testing",
"search",
"association",
"paper",
"exploration",
"research",
"process",
"computation",
"data",
"select",
"sensitive"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U"
] |
1nDiHBt | a channel access scheme for large dense packet radio networks | Prior work in the field of packet radio networks has often assumed a simple success-if-exclusive model of successful reception. This simple model is insufficient to model interference in large dense packet radio networks accurately. In this paper we present a model that more closely approximates communication theory and the underlying physics of radio communication. Using this model we present a decentralized channel access scheme for scalable packet radio networks that is free of packet loss due to collisions and that at each hop requires no per-packet transmissions other than the single transmission used to convey the packet to the next-hop station. We also show that with a modest fraction of the radio spectrum, pessimistic assumptions about propagation resulting in maximum-possible self-interference, and an optimistic view of future signal processing capabilities that a self-organizing packet radio network may scale to millions of stations within a metro area with raw per-station rates in the hundreds of megabits per second. | [
"access",
"scheme",
"radio network",
"model",
"interference",
"paper",
"communication",
"theory",
"physical",
"decentralization",
"scalability",
"collision",
"propagation",
"future",
"signal processing",
"capabilities",
"self-organization",
"scale",
"packet-loss"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U"
] |
56T:z7n | EvolvingSpace: A Data Centric Framework for Integrating Bioinformatics Applications | The paper presents EvolvingSpace, a data centric distributed system, which is intended to address the data and application integration problem in bioinformatics data centers. The system employs commodity PCs for data storage and computation. EvolvingSpace manages data in a decentralized manner, which is convenient for storing data annotations and can eliminate potential data-access bottlenecks. It indexes distributed data at multiple levels to facilitate the construction of complex workflows that consist of applications running on different types of data. In addition, the paper proposes a data locality and workflow aware scheduling algorithm (ES-Scheduling) to balance the data distribution and computing performance as well as throughput and workflow response time. We run extensive experiments using the system with real bioinformatics applications. Our results show that the system is efficient for running integrated bioinformatics applications and has good scalability. | [
"bioinformatics",
"distributed systems",
"scheduling",
"data sharing",
"workflow management",
"data models"
] | [
"P",
"P",
"P",
"M",
"R",
"M"
] |
1:ecmAJ | Software engineering for web services workflow systems | Service-oriented computing (SOC) suggests that many open, network-accessible services will be available over the Internet for organizations to incorporate into their own processes. Developing new software systems by composing an organization's local services and externally-available web services is conceptually different from system development supported by traditional software engineering lifecycles. Consumer organizations typically have no control over the quality and/or consistency of the external services that they incorporate; thus top-down software development lifecycles are impractical. Software architects and designers will require agile, lightweight processes to evaluate tradeoffs in system design based on the "estimated" responsiveness of external services coupled with the known performance of local services. We introduce a model-driven software engineering approach for designing systems (i.e. workflows of web services) under these circumstances and a corresponding simulation-based evaluation tool. | [
"web service evaluation",
"workflow management system",
"software engineering and modelling"
] | [
"R",
"M",
"M"
] |
4Feq9HQ | Consumer trust, perceived security and privacy policy - Three basic elements of loyalty to a web site | Purpose - The purpose of this paper is to analyze the effect of privacy and perceived security on the level of trust shown by the consumer in the internet. It also aims to reveal and test the close relationship between the trust in a web site and the degree of loyalty to it. Design/methodology/approach - First, there is an explanation of the main attributes of the concepts examined, with special attention being paid to the multi-dimensional nature of the variables and the relationships between them. This is followed by an examination of the validation processes of the measuring instruments. Findings - Specifically, the study reveals that an individual's loyalty to a web site is closely linked to the levels of trust. Thus, the development of trust not only affects the intention to buy, as shown by previous researchers, but it also directly affects the effective purchasing behavior, in terms of preference, cost and frequency of visits, and therefore, the level of profitability provided by each consumer. In addition, the analyses show that trust in the internet is particularly influenced by the security perceived by consumers regarding the handling of their private data. Practical implications - The results of this study provide several managerial implications for companies in this sector. Suggestions are offered for national and international organizations involved in regulating these markets. Originality/value - The results of this research remedy, to a certain extent, the scarcity of empirical studies that have designed and validated measuring scales for the concepts of privacy, security, trust and loyalty to the internet, as well as testing the relationships between them. | [
"trust",
"privacy",
"internet",
"data security",
"customer loyalty"
] | [
"P",
"P",
"P",
"R",
"M"
] |
3HrjpM6 | Locating online loan applicants for an insurance company | Purpose - This study aims to investigate insurance policy loan applicant characteristics. Additionally, it reveals the behaviour patterns of heavy users who have applied for at least two loans. A policy loan prediction model is established which is designed to increase loan application rates. Design/methodology/approach - The proposed model is implemented using data-mining techniques and comprises two mechanisms: a business rule generator and a recommendation mechanism. Two analytical approaches, the C.5 and Apriori algorithm, are employed to analyse the profile and browsing log DBs of insured individuals. The prediction model is verified by actual data from a Taiwanese insurance company. Findings - The data-mining results reveal that five attributes are ultimately used to establish the prediction model, namely: gender, marketing channel, insurance type, area of policy owner, and assumed interest rate. Additionally, the analytical results also indicate that insured individuals apply for loans as a result of arbitrage inducement. The accuracy of loan applicant prediction can exceed 70 per cent. Finally, some interesting patterns emerge for heavy users, such as the finding that loan applicants are used to applying for loans continuously (loan application repetition is on average two to three times). Research limitations/implications - Some policy owners who are unfamiliar with the web interface prefer to contact insurance personnel directly to discuss their insurance needs, and thus no browsing records are available for such users. In such cases only the profile could be collected and analysed. Practical implications - The proposed model enables insurance firms to locate potential loan applicants according to the data-mining results. As in the illustration scenario in the paper, insurance personnel can contact these potential loan applicants before they submit loan applications to the bank. Additionally, loan-related information is provided for online insurance users based on their browsing logs. The loan application rate is thus expected to increase, along with interest revenue. Originality/value - As long as the policy proceeds, the interest income from the policy loan seems to be a good option for extending insurance company operational earnings. Understanding the characteristics of loan applicants will provide helpful information. Besides, the proposed mechanism will be more appropriate to online users, who are unwilling to deal with unwanted information. | [
"loans",
"insurance companies",
"data handling",
"taiwan"
] | [
"P",
"P",
"M",
"U"
] |
52BpKaJ | The pros and cons of computing the h-index using Google Scholar | Purpose - A previous paper by the present author described the pros and cons of using the three largest cited reference enhanced multidisciplinary databases and discussed and illustrated in general how the theoretically sound idea of the h-index may become distorted depending on the software and the content of the database(s) used, and the searchers' skill and knowledge of the database features. The aim of this paper is to focus on Google Scholar (GS), from the perspective of calculating the h-index for individuals and journals. Design/methodology/approach - A desk-based approach to data collection is used and critical commentary is added. Findings - The paper shows that effective corroboration of the h-index and its two component indicators can be done only on persons and journals with which a researcher is intimately familiar. Corroborative tests must be done in every database for important research. Originality/value - The paper highlights the very time-consuming process of corroborating data, tracing and counting valid citations and points out GS's unscholarly and irresponsible handling of data. | [
"databases",
"information retrieval",
"search engines",
"referencing"
] | [
"P",
"U",
"U",
"U"
] |
1o-6:EH | A thermo-electrical problem with a nonlocal radiation boundary condition | A coupled problem arising in induction heating furnaces is studied. The thermal problem, which involves a change of phase, has a nonlocal radiation boundary condition. Convective heat transfer in the liquid is also included, which makes it necessary to compute the liquid motion. For the space discretization, we propose finite element methods which are combined with characteristics methods in the thermal and flow models to handle the convective terms. In the electromagnetic model they are coupled with boundary element methods (BEM/FEM). An iterative algorithm is introduced for the whole coupled model, and numerical results for an industrial induction furnace are presented. (C) 2010 Elsevier Ltd. All rights reserved. | [
"induction heating",
"bem/fem",
"numerical simulation",
"nonlocal radiation condition",
"eddy currents",
"phase change"
] | [
"P",
"P",
"M",
"R",
"U",
"R"
] |
2V-KCAL | Evaluating the effect of health warnings in influencing Australian smokers' psychosocial and quitting behaviours using fuzzy causal network | This paper explores the application of fuzzy causal networks (FCNs) to evaluating the effect of health warnings in influencing Australian smokers' psychosocial and quitting behaviours. The sample data used in this study are selected from the International Tobacco Control Policy Evaluation Survey project. Our research findings demonstrate that the new health warnings implemented in Australia have obvious impacts on smokers' psychosocial and quitting behaviours. The FCN is a useful framework to investigate such impacts; it overcomes the limitations of using traditional statistical techniques, such as linear regression and logistic regression, to analyse non-linear data (a generic iteration sketch follows this record). (c) 2010 Elsevier Ltd. All rights reserved. | [
"fuzzy causal network",
"tobacco control",
"knowledge discovery",
"decision support",
"public health"
] | [
"P",
"P",
"U",
"U",
"M"
] |
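Fuzzy causal networks of this kind are typically iterated like fuzzy cognitive maps: concept activations are pushed through a signed causal weight matrix and squashed until a fixed point. A generic sketch, with invented concepts and weights rather than the survey-derived model:

```python
import numpy as np

concepts = ["warning exposure", "perceived risk", "quit intention"]
# Signed causal weights W[i, j]: influence of concept i on concept j
# (placeholder values, not estimated from the survey data).
W = np.array([[0.0, 0.7, 0.2],
              [0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0]])

def iterate_fcn(a0, W, steps=20, lam=2.0):
    """Iterate activations a <- sigmoid(a @ W + a) toward a fixed point.

    The '+ a' term is the common self-memory variant; lam controls
    the steepness of the squashing function.
    """
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a = 1.0 / (1.0 + np.exp(-lam * (a @ W + a)))
    return a

print(dict(zip(concepts, iterate_fcn([0.9, 0.1, 0.1], W).round(3))))
```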
inyh9Zt | Brown-Robinson method for interval matrix games | In this paper, two-person interval matrix games are considered and, by means of an acceptability index, the Brown-Robinson method for finding a mixed-strategy equilibrium is adapted to interval matrix games. Numerical examples are also given (the classical iteration is sketched after this record). | [
"brownrobinson method",
"interval matrix game",
"acceptability index"
] | [
"P",
"P",
"P"
] |
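For reference, the classical Brown-Robinson (fictitious play) iteration for an ordinary zero-sum matrix game; in the paper's interval setting, the two argmax/argmin comparisons would rank interval payoffs through the acceptability index instead:

```python
import numpy as np

def brown_robinson(A, iters=10000):
    """Brown-Robinson fictitious play for a zero-sum matrix game.

    A[i, j] is the payoff to the row (maximizing) player. Returns the
    empirical mixed strategies and a (lower, upper) value estimate.
    """
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    u = np.zeros(m)   # cumulative payoff of each pure row vs column's play
    v = np.zeros(n)   # cumulative payoff conceded by each pure column
    i, j = 0, 0       # arbitrary initial pure strategies
    for _ in range(iters):
        row_counts[i] += 1
        col_counts[j] += 1
        u += A[:, j]
        v += A[i, :]
        i = int(np.argmax(u))   # best responses to empirical play; an
        j = int(np.argmin(v))   # interval game would compare payoffs
                                # via an acceptability index here
    return row_counts / iters, col_counts / iters, (v.min() / iters, u.max() / iters)

# Matching pennies: value 0, equilibrium (1/2, 1/2).
p, q, bracket = brown_robinson(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(p.round(2), q.round(2), np.round(bracket, 2))
```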
2JnZNM& | spoken dialogue interfaces | This introductory tutorial overviews recent advancement and current efforts in the integration of speech processing with other components of spoken-dialogue systems. It examines important results in designing, constructing, and evaluating complete conversational systems that integrate speech recognition and synthesis with other enabling technologies. Among the disciplines contributing material for the course are, therefore, speech recognition and synthesis, but also natural language processing, user-interface design, machine translation, planning and plan recognition, gesture analysis, computational discourse, and usability evaluation. The full-day course is comprised of four sessions including an introduction to the state of the art, review of existing spoken interface systems, the integration of speech processing with other interaction modalities, and a closing session on evaluation methods, tools for developing spoken dialogue systems, and other issues affecting the spoken interface community. | [
"dialogue",
"speech",
"natural language",
"conversational interfaces"
] | [
"P",
"P",
"P",
"R"
] |
3kPzCZ3 | Modeling and optimizing traffic light settings in road networks | We discuss continuous traffic flow network models including traffic lights. A mathematical model for traffic light settings within a macroscopic continuous traffic flow network is presented, and theoretical properties are investigated. The switching of the traffic light states is modeled as a discrete decision and is subject to optimization. A numerical approach for the optimization of switching points as a function of time based upon the macroscopic traffic flow model is proposed. The numerical discussion relies on an equivalent reformulation of the original problem as well as a mixed-integer discretization of the flow dynamics. The large-scale optimization problem is solved using derived heuristics within the optimization process. Numerical experiments are presented for a single intersection as well as for a road network. | [
"optimization",
"traffic networks",
"discretized conservation laws",
"mixed-integer programming"
] | [
"P",
"R",
"M",
"M"
] |
-XV7Sde | An anthropomorphic controlled hand prosthesis system | Based on the HIT/DLR (Harbin Institute of Technology/Deutsches Zentrum für Luft- und Raumfahrt) Prosthetic Hand II, an anthropomorphic controller is developed to help amputees use and perceive prosthetic hands more like people with normal physiological hands. The core of the anthropomorphic controller is a hierarchical control system, composed of a top controller and a low-level controller. The top controller has been designed both to interpret the amputee's intentions through electromyography (EMG) signal recognition and to provide the subject-prosthesis interface control with electro-cutaneous sensory feedback (ESF), while the low-level controller is responsible for grasp stability. The control strategies include the EMG control strategy, the EMG and ESF closed-loop control strategy, and the voice control strategy. Through EMG signal recognition, 10 types of hand postures are recognized based on a support vector machine (SVM). An anthropomorphic closed-loop system is constructed to include the customer, the sensory feedback system, the EMG control system, and the prosthetic hand, so as to help the amputee perform a more successful EMG grasp. Experimental results suggest that the anthropomorphic controller can be used for multi-posture recognition, and that grasping with ESF is a cognitive dual process with visual and sensory feedback. This process, while outperforming the visual feedback process, conveys the grasp force magnitude during the manipulation of objects. | [
"anthropomorphic controller",
"prosthetic hand",
"electro-cutaneous sensory feedback",
"emg recognition"
] | [
"P",
"P",
"P",
"R"
] |
2yQxVUP | Logics for approximate and strong entailments | We consider two kinds of similarity-based reasoning and formalise them in a logical setting. In one case, we are led by the principle that conclusions can be drawn even if they are only approximately correct. This leads to a graded approximate entailment, which is weaker than classical entailment. In the other case, we follow the principle that conclusions must remain correct even if the assumptions are slightly changed. This leads to a notion of a graded strong entailment, which is stronger than classical entailment. We develop two logical calculi based on the notions of approximate and of strong entailment, respectively. (C) 2011 Elsevier B.V. All rights reserved. | [
"strong entailment",
"similarity-based reasoning",
"approximate entailment",
"non-classical logics"
] | [
"P",
"P",
"P",
"M"
] |
3xaGCAc | WORKPLACE MANAGEMENT AND EMPLOYEE MISUSE: DOES PUNISHMENT MATTER? | With the ubiquitous deployment of the Internet, workplace Internet misuse has raised increasing concern for organizations. Research has demonstrated employee reactions to monitoring systems and how they are implemented. However, little is known about the impact of punishment-related policies on employee intention to misuse the Internet. To extend this line of research beyond prior studies, this paper proposes an integrated research model applying the Theory of Planned Behavior, Deterrence Theory, and Theory of Ethics to examine the impact of punishment-related policy on employees' Internet misuse intentions. The results indicate that perceived importance, perceived behavioral control and subjective norms have a significant influence on employee intention to avoid Internet misuse. Contrary to expectations, there is no support for the influence of punishment severity and punishment certainty. | [
"internet misuse",
"monitoring",
"workplace internet use monitoring",
"behavioral intentions"
] | [
"P",
"P",
"M",
"R"
] |
M-gi-uQ | Making the leap to a software platform strategy: Issues and challenges | While there are many success stories of achieving high reuse and improved quality using software platforms, there is a need to investigate the issues and challenges organizations face when transitioning to a software platform strategy. This case study provides a comprehensive taxonomy of the challenges faced when a medium-scale organization decided to adopt software platforms. The study also reveals how new trends in software engineering (i.e. agile methods, distributed development, and flat management structures) interplayed with the chosen platform strategy. We used an ethnographic approach to collect data by spending time at a medium-scale company in Scandinavia. We conducted 16 in-depth interviews with representatives of eight different teams, three of which were working on three separate platforms. The collected data was analyzed using Grounded Theory. The findings identify four classes of challenges, namely: business challenges, organizational challenges, technical challenges, and people challenges. The article explains how these findings can be used to help researchers and practitioners identify practical solutions and required tool support. The organization's decision to adopt a software platform strategy introduced a number of challenges. These challenges need to be understood and addressed in order to reap the benefits of reuse. Researchers need to further investigate issues such as supportive organizational structures for platform development, the role of agile methods in software platforms, tool support for testing and continuous integration in the platform context, and reuse recommendation systems. | [
"software platform",
"grounded theory",
"software reuse",
"platform challenges",
"ethnographic study"
] | [
"P",
"P",
"R",
"R",
"R"
] |
-1XRKkH | Communication time delay estimation for load frequency control in two-area power system | Due to the increased size and complexity of power system networks, stability and load frequency control (LFC) are of serious concern in a wide area monitoring system (WAMS) that obtains signals from phasor measurement units (PMUs). The quality of service (QoS) of the communication infrastructure, in terms of signal delay, packet loss probability, queue length and throughput, is very important and must be considered carefully in a WAMS-based thermal power system. However, very few studies have been presented that include QoS of the communication infrastructure in load frequency control (LFC) of a power system. So this paper presents LFC for a two-area thermal power system based on estimated time delay and packet loss probability using the Markovian approach. The delay and packet loss probability are modeled by different mathematical functions. Normally, the frequency deviation signal is transmitted from a remote terminal unit (RTU) to the control center and from the control center to the individual control units of plants. The delay incurred is located in the forward loop of a PSO-based PI/PID controller in the form of a transport delay. To verify the efficacy of the controller performance, the estimated constant delay and time-varying delay are applied to the controller in the two-area thermal-thermal power system, with and without governor dead band (GDB) and generation rate constraints (GRC), for various load conditions. The study is further demonstrated for the time delay being compensated by a 2nd-order Padé approximation. The results show that the frequency deviation is minimal in terms of stability and transient response. | [
"load frequency control",
"markovian approach",
"communication delay",
"packet loss probability and particle swarm optimization"
] | [
"P",
"P",
"R",
"M"
] |
444Uf32 | finding similar identities among objects from multiple web sources | When integrating data from multiple Web sources, objects can exist in different formats and structures, making it difficult to identify those that can be matched together. In this paper, we propose an identification approach to finding similar identities among objects from multiple Web sources. In this approach, object identification works like the relational join operation, where a similarity function takes the place of the equality condition. This similarity function is based on information retrieval techniques. Our approach differs from others in the literature since it can be used to identify more complexly structured objects (e.g., XML documents), and not only objects with a flat structure such as relations. The effectiveness of our approach is demonstrated by experimental results with real Web data sources from different domains, which reach precision levels above 75% (a toy similarity join is sketched after this record). | [
"similarity",
"web data integration"
] | [
"P",
"R"
] |
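A toy version of the similarity-function-as-join-predicate idea using TF-IDF cosine scores; the records and the acceptance threshold are invented, and the paper's handling of complex XML structure is not reproduced:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Flattened textual renderings of objects from two web sources.
source_a = ["J. Smith, Data Integration on the Web, 2004",
            "A. Jones, Querying XML Documents, 2003"]
source_b = ["Data integration on the web. John Smith (2004)",
            "On similarity joins for flat relations. P. Brown"]

vec = TfidfVectorizer().fit(source_a + source_b)
sims = cosine_similarity(vec.transform(source_a), vec.transform(source_b))

THRESHOLD = 0.5   # illustrative acceptance threshold
for i, row in enumerate(sims):
    for j, s in enumerate(row):
        if s >= THRESHOLD:   # similarity replaces the equality predicate
            print(f"match: a[{i}] ~ b[{j}] (cosine={s:.2f})")
```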
3mnK851 | Timed net with choice probability and its minimum cycle time - The case of location-based service | For performance analysis of computer systems, the minimum cycle time method has been widely used. The minimum cycle time method is a mathematical technique with which we can find the minimum duration of time needed to fire all the transitions at least once and come back to the initial marking in a timed net. A timed net is a modified version of a Petri net, where a transition is associated with a delay time. In the real world, an event A is usually in a conflict relation with another event B, i.e. if event A is selected to occur then event B cannot occur. When events A and B are in conflict, they are usually associated with probabilities of choice. However, a timed net is not equipped with any facility for specifying probabilities of event choices. Therefore, the minimum cycle time method applied to a timed net is apt to overlook probabilities of event choices and yield a wrong result. We propose 'Timed net with Choice Probability', where a transition can be associated with both a delay time and a probability of choice. We also introduce an algorithm for minimum cycle time analysis of 'Timed net with Choice Probability'. As an example of application, we perform an analysis of a location-based service system using 'Timed net with Choice Probability' (the circuit-ratio computation for the choice-free case is sketched after this record). (C) 2005 Elsevier Ltd. All rights reserved. | [
"minimum cycle time",
"location-based service",
"performance analysis",
"petri net",
"timm net",
"system analysis"
] | [
"P",
"P",
"P",
"P",
"M",
"R"
] |
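For the choice-free special case (a timed event graph), the minimum cycle time is the maximum over directed circuits of total delay divided by total token count; a small sketch of that classical computation (the graph is a placeholder, and the paper's choice probabilities would additionally weight transition delays by expected firing counts):

```python
import networkx as nx

# Places modeled as edges: delay of the source transition on the edge,
# together with the initial token count of the place.
G = nx.DiGraph()
G.add_edge("t1", "t2", delay=2.0, tokens=1)
G.add_edge("t2", "t3", delay=3.0, tokens=0)
G.add_edge("t3", "t1", delay=1.0, tokens=1)
G.add_edge("t2", "t1", delay=4.0, tokens=1)

def min_cycle_time(G):
    best = 0.0
    for cyc in nx.simple_cycles(G):
        edges = list(zip(cyc, cyc[1:] + cyc[:1]))
        d = sum(G[u][v]["delay"] for u, v in edges)
        k = sum(G[u][v]["tokens"] for u, v in edges)
        if k:                         # a token-free circuit would deadlock
            best = max(best, d / k)
    return best

print(min_cycle_time(G))   # max over circuits of delay / tokens
```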
3iQAy:A | Priority Based Scheduling in WiMAX System | In this paper, we propose an approach which combines priority scheduling with resource allocation over the WiMAX wireless network to increase the throughput and reduce the delay between the base station and subscriber stations. The simulation is conducted using Network Simulator-2 to compare the performance of the proposed scheme with other scheduling schemes. The results show that our proposed scheme experiences acceptable delay for different service-flow types as compared with other schemes. Furthermore, the uplink throughput of the proposed scheme is about 1.5 times better than that of the other schemes (a strict-priority queue sketch follows this record). | [
"priority",
"scheduling",
"wimax",
"resource allocation",
"ieee802.16"
] | [
"P",
"P",
"P",
"P",
"U"
] |
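The ordering idea in its simplest form, a strict-priority queue with FIFO ties; the WiMAX-style class names are used only as labels, and the paper's combined resource-allocation step is not modeled:

```python
import heapq

class PriorityScheduler:
    """Strict-priority queue: lower number = higher service priority."""

    def __init__(self):
        self._q, self._seq = [], 0

    def enqueue(self, priority, packet):
        # The running sequence number keeps FIFO order within a class.
        heapq.heappush(self._q, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._q)[2] if self._q else None

s = PriorityScheduler()
s.enqueue(2, "BE data")
s.enqueue(0, "UGS voice")
s.enqueue(1, "rtPS video")
print([s.dequeue() for _ in range(3)])   # UGS first, then rtPS, then BE
```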
2J5LidP | Reaction-diffusion model Monte Carlo simulations on the GPU | We created an efficient algorithm suitable for graphics processing units (GPUs) to perform Monte Carlo simulations on a subset of reaction-diffusion models. The set of reaction-diffusion models that the algorithm is applied to represents a seemingly simplistic set of problems on a one-dimensional lattice, where each site contains either a particle or is empty. However, these systems exhibit non-equilibrium phase transitions, with very large finite-time corrections, which mandates a fast algorithm to simulate them. The algorithm presented here uses techniques that are specific to GPU programming, and combines these with multispin coding to create one of the fastest algorithms for reaction-diffusion models (a multispin-coding sketch follows this record). As an example, the algorithm is applied to the pair contact process with diffusion (PCPD). Compared to a simple algorithm on the CPU, our GPU algorithm is approximately 4000 times faster. The GPU algorithm is roughly 55 times faster than an optimized version for the CPU. | [
"reactiondiffusion models",
"monte carlo simulations",
"gpu",
"pcpd"
] | [
"P",
"P",
"P",
"P"
] |
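The multispin-coding trick mentioned above packs one lattice site per bit, so 64 independent replicas update in a single bitwise operation. A CPU-side numpy sketch of the encoding for a toy 1-D process in which an occupied site deposits a particle on its right neighbour with probability p; this illustrates the encoding only, not the paper's PCPD kernel or its CUDA implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand64(shape):
    """Uniform 64-bit random words built from two 32-bit halves."""
    hi = rng.integers(0, 1 << 32, shape, dtype=np.uint64)
    lo = rng.integers(0, 1 << 32, shape, dtype=np.uint64)
    return (hi << np.uint64(32)) | lo

def bernoulli_words(p, shape, bits=16):
    """Words whose individual bits are 1 with probability ~p.

    Classic multispin construction: combine fair random words along
    the binary expansion of p (resolution 2**-bits), least significant
    bit first.
    """
    m = np.zeros(shape, dtype=np.uint64)
    for j in range(bits):
        b = (int(p * (1 << bits)) >> j) & 1
        m = (rand64(shape) | m) if b else (rand64(shape) & m)
    return m

# Toy 1-D kinetics on 64 replicas in parallel, one replica per bit.
L, p = 256, 0.3
lattice = rand64(L)
for _ in range(100):
    firing = lattice & bernoulli_words(p, L)   # sites that fire this step
    lattice |= np.roll(firing, 1)              # deposit on right neighbour
print(sum(bin(int(w)).count("1") for w in lattice) / (64 * L))  # density
```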
-SjFAvZ | Applications of linguistic techniques for use case analysis | Use cases are effective techniques to express the functional requirements of a system in a very simple and easy-to-learn way. Use cases are mainly composed of natural language (NL) sentences, and the use of NL to describe the behaviour of a system is always a critical point, due to the inherent ambiguities originating from the different possible interpretations of NL sentences. We discuss in this paper the application of analysis techniques based on a linguistic approach to detect, within requirements documents, defects related to such an inherent ambiguity. Starting from the proposed analysis techniques, we will define some metrics that will be used to perform a quality evaluation of requirements documents. Some available automatic tools supporting the linguistic analysis of NL requirements have been used to evaluate an industrial use cases document according to the defined metrics. A discussion on the application of linguistic analysis techniques to support the semantic analysis of use cases is also reported. | [
"use cases",
"quality evaluation of requirements",
"natural language processing",
"requirements engineering"
] | [
"P",
"P",
"M",
"M"
] |
1a3EcUU | An intelligent scheduling system using fuzzy logic controller for management of services in WiMAX networks | The appearance of media applications with high bandwidth and quality-of-service requirements has made a significant impact on telecommunications technology. In this direction, the IEEE802.16 has defined wireless access systems called WiMAX. These systems provide high-speed communications over long distances. For this purpose, some service classes with QoS requirements are defined, but the QoS scheduler is not standardized in IEEE802.16. The scheduling mechanism has a significant effect on the performance of WiMAX systems in the use of bandwidth and radio resources. Some scheduling algorithms have been introduced by researchers, but they only provide some limited aspects of QoS. An intelligent decision support system is therefore necessary for scheduling. In this paper a fuzzy based scheduling system is proposed for combinations of real-time and non-real-time polling services, which provides the QoS requirements and fairness under dynamic conditions. A series of simulation experiments has been carried out to evaluate the performance of the proposed scheduling algorithm in terms of latency and throughput QoS parameters. The results show that the proposed method performs effectively regarding both of these criteria and achieves proportional system performance and fairness among different types of traffic. | [
"wimax",
"ieee802.16",
"fuzzy scheduling",
"services management",
"bandwidth assignment",
"qos guarantee"
] | [
"P",
"P",
"R",
"R",
"M",
"M"
] |
35MF5d1 | Air pollution modelling using a Graphics Processing Unit with CUDA | The Graphics Processing Unit (GPU) is a powerful tool for parallel computing. In the past years the performance and capabilities of GPUs have increased, and the Compute Unified Device Architecture (CUDA), a parallel computing architecture, has been developed by NVIDIA to utilize this performance in general-purpose computations. Here we show for the first time a possible application of the GPU for environmental studies, serving as a basis for decision-making strategies. A stochastic Lagrangian particle model has been developed on CUDA to estimate the transport and transformation of radionuclides from a single point source during an accidental release. Our results show that the parallel implementation achieves typical acceleration values in the order of 80-120 times compared to the CPU using a single-threaded implementation on a 2.33 GHz desktop computer. Only very small differences have been found between the results obtained from GPU and CPU simulations, which are comparable with the effect of stochastic transport phenomena in the atmosphere. The relatively high speedup with no additional cost to maintain this parallel architecture could result in wide usage of GPUs for diversified environmental applications in the near future. | [
"air pollution",
"cuda",
"parallel computing",
"environmental application",
"video card"
] | [
"P",
"P",
"P",
"P",
"U"
] |
-h1svjr | Factored sequence kernels | In this paper we propose an extension of sequence kernels to the case where the symbols that define the sequences have multiple representations. This configuration occurs, for instance, in natural language processing, where words can be characterized according to different linguistic dimensions. The core of our contribution is to integrate early the different representations in the kernel, in a way that generates rich composite features defined across the various symbol dimensions. | [
"sequence kernels",
"machine learning",
"kernel methods",
"factored kernels",
"language modeling"
] | [
"P",
"U",
"M",
"R",
"M"
] |
17R7Ms1 | An algebraic boundary orthogonalization procedure for structured grids | An algebraic procedure for grid orthogonalization has been developed. It is often difficult to include both grid clustering and orthogonalization in a grid generation method. Often the degree and extent of orthogonality are hard to control when orthogonalization is included in a complicated grid generation method. Fortunately, grid orthogonalization can be performed independently of grid generation. The orthogonalization method developed is simple and includes invertibility control. Copyright (C) 2000 John Wiley & Sons, Ltd. | [
"structured",
"grid generation",
"computational fluid dynamics"
] | [
"P",
"P",
"U"
] |
4eV7ur& | ERKN integrators for systems of oscillatory second-order differential equations | For systems of oscillatory second-order differential equations y'' + My = f, with M ∈ R^(m x m) a symmetric positive semi-definite matrix, X. Wu et al. have proposed the multidimensional ARKN methods [X. Wu, X. You, J. Xia, Order conditions for ARKN methods solving oscillatory systems, Comput. Phys. Comm. 180 (2009) 2250-2257], which are an essential generalization of J.M. Franco's ARKN methods for one-dimensional problems or for systems with a diagonal matrix M = ω^2 I [J.M. Franco, Runge-Kutta-Nystrom methods adapted to the numerical integration of perturbed oscillators, Comput. Phys. Comm. 147 (2002) 770-787]. One of the merits of these methods is that they integrate exactly the unperturbed oscillators y'' + My = 0. Regretfully, even for the unperturbed oscillators the internal stages Y_i of an ARKN method fail to equal the values of the exact solution y(t) at t_n + c_i h, respectively. Recently H. Yang et al. proposed the ERKN methods to overcome this drawback [H.L. Yang, X.Y. Wu, Xiong You, Yonglei Fang, Extended RKN-type methods for numerical integration of perturbed oscillators, Comput. Phys. Comm. 180 (2009) 1777-1794]. However, the ERKN methods in that paper are only considered for the special case where M is a diagonal matrix with nonnegative entries. The purpose of this paper is to extend the ERKN methods to the general case with M ∈ R^(m x m), where the perturbing function f depends only on y (the underlying variation-of-constants formula is recalled after this record). The accompanying numerical experiments demonstrate that the ERKN methods are more efficient than the existing methods for the computation of oscillatory systems. In particular, if M ∈ R^(m x m) is a symmetric positive semi-definite matrix, it is highly important for the new ERKN integrators to show energy conservation in the numerical experiments for problems with Hamiltonian H(p,q) = (1/2) p^T p + (1/2) q^T M q + V(q), in comparison with the well-known methods in the scientific literature. These so-called separable Hamiltonians arise in many areas of the physical sciences, e.g., macromolecular dynamics, astronomy, and classical mechanics. (C) 2010 Elsevier B.V. All rights reserved. | [
"erkn integrators",
"order conditions",
"oscillatory systems",
"b-series",
"hamiltonian systems",
"nonlinear wave equations"
] | [
"P",
"P",
"P",
"U",
"R",
"M"
] |
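For reference, the exact-integration property claimed in the abstract stems from the standard variation-of-constants form of y'' + My = f(y). Writing Ω = M^(1/2), the exact solution satisfies the textbook identity below, which ARKN/ERKN updates reproduce exactly when f = 0 (this is general background, not a result specific to the paper):

```latex
% Variation-of-constants form of  y'' + My = f(y),  with  \Omega = M^{1/2}:
y(t_n + h)  = \cos(h\Omega)\, y(t_n) + \Omega^{-1}\sin(h\Omega)\, y'(t_n)
            + \int_0^h \Omega^{-1}\sin\!\big((h-s)\Omega\big)\, f\big(y(t_n+s)\big)\, \mathrm{d}s,
y'(t_n + h) = -\Omega\sin(h\Omega)\, y(t_n) + \cos(h\Omega)\, y'(t_n)
            + \int_0^h \cos\!\big((h-s)\Omega\big)\, f\big(y(t_n+s)\big)\, \mathrm{d}s.
```

For singular (semi-definite) M these matrix functions remain well defined, since cos(hΩ) and Ω^{-1}sin(hΩ) expand as even power series in Ω, i.e., as power series in M itself.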
4iqhU8T | MUSIPER: a system for modeling music similarity perception based on objective feature subset selection | We explore the use of objective audio signal features to model the individualized (subjective) perception of similarity between music files. We present MUSIPER, a content-based music retrieval system which constructs music similarity perception models of its users by associating different music similarity measures with different users. Specifically, a user-supplied relevance feedback procedure and related neural network-based incremental learning allow the system to determine which subset of a set of objective features more accurately approximates the subjective music similarity perception of a specific user. Our implementation and evaluation of MUSIPER verify the relation between subsets of objective features and individualized music similarity perception and exhibit significant improvement in individualized perceived similarity in subsequent music retrievals. | [
"music similarity perception",
"individualization",
"relevance feedback",
"user model",
"user driven feature selection",
"content-based retrieval"
] | [
"P",
"P",
"P",
"R",
"M",
"R"
] |
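To make the mechanism concrete, the sketch below scores candidate feature subsets by how well their restricted similarity agrees with a user's relevance feedback; the best-scoring subset becomes that user's similarity measure. The mask pool, similarity form, and additive score update are illustrative assumptions — the paper uses a neural-network-based incremental learner, which is not reproduced here.

```python
import numpy as np

def masked_similarity(x, y, mask):
    """Similarity between two feature vectors restricted to one subset
    of the objective audio features (mask is a 0/1 vector)."""
    return 1.0 / (1.0 + np.linalg.norm((x - y) * mask))

def update_subset_scores(scores, query, result, relevant, mask_pool, lr=0.1):
    """Reward subsets that call a relevant result similar (and an
    irrelevant one dissimilar)."""
    for i, mask in enumerate(mask_pool):
        s = masked_similarity(query, result, mask)
        scores[i] += lr * (s if relevant else -s)
    return scores

# Toy usage with three candidate subsets over four features.
mask_pool = [np.array(m, float) for m in ([1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0])]
scores = np.zeros(len(mask_pool))
q = np.array([0.2, 0.4, 0.9, 0.1])
r = np.array([0.25, 0.38, 0.1, 0.8])
scores = update_subset_scores(scores, q, r, relevant=True, mask_pool=mask_pool)
best_mask = mask_pool[int(np.argmax(scores))]   # individualized measure
```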
2Vgw42x | Wyner-Ziv Coding Over Broadcast Channels: Hybrid Digital/Analog Schemes | A new hybrid digital/analog scheme is proposed for lossy transmission of a Gaussian source over a bandwidth-matched Gaussian broadcast channel with source side information available at each receiver. The proposed scheme combines two schemes that were previously shown to achieve optimal point-to-point distortion/power tradeoff simultaneously at all receivers under two distinct conditions stated in terms of channel and side information quality parameters. For the two-receiver case, the combined scheme is shown to achieve the same kind of optimality for the entire region in the parameter space sandwiched between those two conditions. Crucial to this result is a new degree of freedom discovered in designing point-to-point hybrid digital/analog schemes with side information. When superimposed with analog transmission, the proposed scheme outperforms all previously known schemes even outside the optimality region in the parameter space. | [
"wyner-ziv coding",
"broadcast channels",
"costa coding",
"hybrid digital/analog coding",
"joint source-channel coding",
"writing on dirty paper"
] | [
"P",
"P",
"M",
"R",
"M",
"U"
] |
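As background for the optimality notion used above: for a Gaussian source X sent over a bandwidth-matched Gaussian channel with power P and noise variance N_k, when receiver k has side information Y_k, the best point-to-point distortion follows from the Gaussian Wyner-Ziv rate-distortion function D(R) = σ²_{X|Y} 2^{-2R} evaluated at channel capacity (stated here from general knowledge, not derived in the abstract):

```latex
% Point-to-point benchmark at receiver k (power P, noise N_k, side info Y_k):
D_k = \sigma^2_{X \mid Y_k}\, 2^{-2 C_k} = \frac{\sigma^2_{X \mid Y_k}}{1 + P/N_k},
\qquad
C_k = \tfrac{1}{2}\log_2\!\big(1 + P/N_k\big).
```

Achieving D_k simultaneously at every receiver, rather than at a single one, is exactly what the hybrid digital/analog schemes in the paper target.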
-a6kt4H | An Experimental Test Bed for Small Unmanned Helicopters | This paper introduces a custom experimental test bed for the evaluation of autonomous flight controllers for unmanned helicopters. Developing controllers for unmanned helicopters is a difficult procedure: controllers are first tested in simulation and then through actual experimentation on real vehicles. As simulation cannot accurately represent exact real flight conditions or the dangers involved in them, the suggested test bed fills the gap between simulation runs and experimental flights. The developed system consists of a small helicopter mounted on a flying stand, equipped with a set of sensors for real-time flight monitoring and control. For demonstration purposes, the test bed has been used for the design and validation of a fuzzy-logic-based autopilot able to perform hovering and altitude control. Experimental results are presented and discussed for various test cases. | [
"experimental test bed",
"unmanned helicopters",
"flight control",
"aerial robotics",
"fuzzy control"
] | [
"P",
"P",
"P",
"U",
"R"
] |
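The abstract mentions a fuzzy-logic altitude autopilot without giving its rule base; the sketch below is a deliberately tiny stand-in showing the usual structure (triangular memberships over altitude error, singleton consequents, weighted-average defuzzification). All membership breakpoints and gains are made-up illustrative values, not the paper's controller.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_altitude_controller(error):
    """Map altitude error (target minus altitude, metres; positive =
    too low) to a thrust correction using three fuzzy rules."""
    e = max(-1.5, min(1.5, error))           # saturate the input range
    mu_high = tri(e, -2.0, -1.0, 0.0)        # too high  -> reduce thrust
    mu_ok   = tri(e, -1.0,  0.0, 1.0)        # on target -> hold
    mu_low  = tri(e,  0.0,  1.0, 2.0)        # too low   -> add thrust
    # Weighted average of singleton consequents (-0.2, 0.0, +0.2).
    den = mu_high + mu_ok + mu_low
    return (-0.2 * mu_high + 0.2 * mu_low) / den if den else 0.0

print(fuzzy_altitude_controller(0.5))   # 0.1: small upward correction
```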
-KBenSw | An Overview of the Use of Metadata in Agriculture | Metadata are data that describe other data and the domains they represent, allowing users to make the best possible decisions about their use. They make it possible to report on the existence of data sets relevant to specific needs. Metadata are used to document and organize structured organizational data, minimizing the duplicated effort of locating them and easing their maintenance. Metadata also support the administration of large amounts of data, along with discovery, retrieval, and editing. The global use of metadata is regulated by technical groups or task forces drawn from several segments, such as industry, universities, and research firms. Agriculture in particular offers good examples of applications built on metadata: the integration of systems and equipment enables techniques used in precision agriculture, and the integration of different computer systems via web services or other solutions requires integrated structured data. The purpose of this paper is to present an overview of the consolidated metadata standards in agriculture. | [
"fao",
"agroxml",
"doublin core",
"tdwg"
] | [
"U",
"U",
"U",
"U"
] |
1q1vaX2 | Epidermal and Dermal Characteristics in Skin Equivalent after Systemic and Topical Application of Skin Care Ingredients | Effects of active ingredients from topical and systemic skincare products on the structure and organization of the epidermis, dermal-epidermal junction (DEJ), and dermis were examined using an in vitro reconstructed skin equivalent (SE). Imedeen Time Perfection (ITP) ingredients (a mixture of BioMarine Complex, grape seed extract, tomato extract, and vitamin C) were supplemented systemically into the culture medium. Kinetin, an active ingredient from Imedeen Expression Line Control Serum, was applied topically. Both treatments were tested separately or combined. In the epidermis, all treatments stimulated keratinocyte proliferation, showing a significant increase of Ki67-positive keratinocytes (P < 0.05). Kinetin showed a twofold increase of Ki67-positive cells, ITP resulted in a fivefold increase, and ITP+kinetin showed a ninefold increase. Differentiation of keratinocytes was influenced only by kinetin, since filaggrin was found only in kinetin and kinetin+ITP samples. At the DEJ, laminin 5 was slightly increased by all treatments. In the dermis, only ITP increased the amount of collagen type I. Both kinetin and ITP stimulated formation of fibrillin-1 and elastin deposition. The effect of kinetin was seen in the upper dermis: it stimulated not only the amount of deposited fibrillin-1 and elastin fibers but also their organization perpendicular to the DEJ. ITP stimulated formation of fibrillin-1 in the deeper dermis. In summary, the combination of topical treatment with kinetin and systemic treatment with ITP had complementary beneficial effects on the formation and development of the epidermis and dermis. | [
"skin equivalent",
"imedeen",
"kinetin",
"keratinocyte proliferation",
"mimeskin",
"dermal matrix"
] | [
"P",
"P",
"P",
"P",
"U",
"M"
] |
4d:JHkt | an nc algorithm for finding a maximal acyclic set in a graph | An acyclic set in an undirected graph G = (V, E) is a set A ⊆ V(G) such that the graph induced on A is acyclic. A maximal acyclic set (MAS) has the further property that no vertex can be added to it without creating a cycle in the induced graph. We present the first NC algorithm that finds an MAS in any graph, running in time O(log^4 n) using O((m^2 + n)/log n) processors on an EREW PRAM, answering questions raised by Chen and He. Our NC algorithm for finding an MAS in a graph relies on a novel NC algorithm for finding a maximal forest in a hypergraph, which may be of independent interest. | [
"nc algorithms",
"maximal acyclic set",
"maximal forest",
"graph algorithms",
"hypergraph algorithms"
] | [
"P",
"P",
"P",
"R",
"R"
] |
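The paper's contribution is the parallel (NC) algorithm, which the abstract does not spell out; for intuition only, here is the easy sequential greedy counterpart using a union-find over the induced forest. A vertex closes a cycle exactly when two of its already selected neighbours lie in the same tree, and since trees only merge as vertices are added, a rejected vertex stays rejected — hence the result is maximal.

```python
class DSU:
    """Union-find with path halving (union by arbitrary root)."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def maximal_acyclic_set(n, edges):
    """Greedy sequential MAS: add a vertex unless two of its already
    selected neighbours lie in the same tree of the induced forest."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dsu, in_set, selected = DSU(n), [False] * n, []
    for v in range(n):
        roots = [dsu.find(u) for u in adj[v] if in_set[u]]
        if len(roots) == len(set(roots)):      # no repeated component
            in_set[v] = True
            selected.append(v)
            for r in roots:
                dsu.union(r, v)
        # else: v would close a cycle now, and components only merge
        # later, so it can never be added -> the result is maximal.
    return selected

print(maximal_acyclic_set(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))  # [0, 1, 3]
```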
-ovCb&U | Bayesian system reliability assessment under the vague environment | Classical reliability assessment is based largely on precise information. In practice, however, some information about an underlying system might be imprecise and represented in the form of vague quantities. Thus, it is necessary to generalize the classical methods to vague environments for studying and analyzing the systems of interest. On the other hand, Bayesian approaches have been shown to be useful when there is some prior information about the underlying model. In this paper, Bayesian system reliability assessment is investigated in vague environments. To employ the Bayesian approach, model parameters are assumed to be vague random variables with vague prior distributions. This approach is used to construct the vague Bayes estimate of system reliability by introducing and applying a theorem called the Resolution Identity for vague sets. We also investigate a computational procedure to evaluate the vague Bayes estimate of system reliability. For this purpose, the original problem is transformed into a nonlinear programming problem, which is then divided into eight subproblems to simplify computations. Finally, the results obtained for the subproblems can be used to determine the membership functions of the vague Bayes estimate of system reliability. Two practical examples are provided to clarify the proposed approach. | [
"system reliability",
"bayes estimator",
"nonlinear programming",
"fuzzy reliability",
"mellin transform",
"vague number"
] | [
"P",
"P",
"P",
"M",
"M",
"M"
] |
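Schematically, the construction described above follows the usual α-cut route for fuzzy/vague quantities: each α-level of the vague Bayes estimate of reliability is obtained by optimizing the crisp posterior estimate over parameters confined to the α-cuts of their vague priors — hence the nonlinear programs mentioned in the abstract. A generic statement, with notation assumed rather than taken from the paper:

```latex
% Alpha-cut form of the vague Bayes estimate (schematic):
\tilde{R}(\alpha) =
  \Big[\, \min_{\theta_i \in [\underline{\theta}_i(\alpha),\, \overline{\theta}_i(\alpha)]} \hat{R}_B(\theta)\,,\;
          \max_{\theta_i \in [\underline{\theta}_i(\alpha),\, \overline{\theta}_i(\alpha)]} \hat{R}_B(\theta) \,\Big],
\qquad
\tilde{R} = \bigcup_{\alpha \in (0,1]} \alpha \cdot \tilde{R}(\alpha),
```

where \hat{R}_B(\theta) denotes the crisp Bayes estimate of system reliability for fixed parameters θ, and the union is the resolution identity recomposing the membership function from its level sets.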
j-X5xTL | lattice-based adaptive random testing | Adaptive Random Testing (ART) denotes a family of testing algorithms that have a better performance compared to pure random testing with respect to the number of test cases necessary to detect the first failure. Many of these algorithms, however, are not very efficient regarding runtime. A new ART algorithm is presented that has a better performance than all other ART methods for the block failure pattern. Its runtime is linear in the number of test cases selected, which is nearly as efficient as pure random testing, as opposed to most other ART methods. This new ART algorithm selects the test cases based on a lattice. | [
"adaptive random testing",
"random testing",
"test case selection"
] | [
"P",
"P",
"P"
] |
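The abstract says test cases are selected on a lattice but gives no construction; the sketch below generates inputs at (optionally jittered) lattice cell centres over [0,1)^d, which keeps selection time linear in the number of test cases, matching the efficiency claim. The cell-centre-plus-jitter scheme is an illustrative assumption, not the paper's algorithm.

```python
import itertools
import random

def lattice_test_cases(n_per_axis, dim=2, jitter=0.0, rng=random):
    """Generate test inputs on a regular lattice over [0,1)^dim.

    Each case sits at a cell centre, optionally jittered within its
    cell so repeated runs differ while spacing stays near-even; total
    work is linear in the number of cases produced."""
    step = 1.0 / n_per_axis
    centres = [(i + 0.5) * step for i in range(n_per_axis)]
    cases = []
    for cell in itertools.product(centres, repeat=dim):
        point = [x + rng.uniform(-jitter, jitter) * step for x in cell]
        cases.append([min(max(v, 0.0), 1.0) for v in point])
    return cases

# Toy usage: 16 evenly spread 2-D test inputs with mild randomisation.
for case in lattice_test_cases(4, dim=2, jitter=0.25):
    pass  # feed `case` to the system under test
```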
1A5UuPG | Study of electrochemical etch-stop for high-precision thickness control of single-crystal Si in aqueous TMAH : IPA : pyrazine solutions | In this paper, we describe a method of controlling the thickness of single-crystal Si membranes fabricated by wet anisotropic etching in aqueous tetramethyl ammonium hydroxide (TMAH) : isopropyl alcohol (IPA) : pyrazine solutions. The Si surface of the etch-stopped microdiaphragm is extremely flat, with no noticeable taper or nonuniformity. The benefits of the electrochemical etch-stop method for the etching of n-epilayer-embedded p-type single-crystal Si(001) wafers in aqueous TMAH became apparent when reproducibility of the microdiaphragm thickness in mass production was realized. The results indicated that the use of the electrochemical etch-stop method for the etching of Si in aqueous TMAH provides a powerful and versatile alternative process for the fabrication of high-yield Si microdiaphragms (20 ± 0.26 µm s.d.). With etch-stop, the pressure sensitivity of devices fabricated on the same wafer can be controlled to within 2.3% s.d. | [
"electrochemical etch-stop",
"thickness control",
"si",
"tmah : ipa : pyrazine",
"pressure sensor"
] | [
"P",
"P",
"P",
"P",
"M"
] |
3W-scWP | On the design of an ECOC-Compliant Genetic Algorithm | A novel Genetic Algorithm to optimize the ECOC coding step is presented. The crossover and mutation operators are redefined taking into account the ECOC properties. A new operator that is able to extend the ECOC code is developed. We introduce a novel regularization parameter that is able to control the number of dichotomies. | [
"ecoc",
"genetic algorithms",
"multi-class classification"
] | [
"P",
"P",
"U"
] |
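For context on the objects being evolved: an ECOC code matrix has one codeword row per class and one column per dichotomy, and decoding assigns the class whose codeword is nearest to the dichotomizers' outputs. The mutation operator below enforces two common ECOC validity constraints (distinct rows, non-constant columns); the paper's redefined operators are richer, so treat this as an illustrative sketch only.

```python
import numpy as np

def ecoc_decode(code_matrix, dichotomy_outputs):
    """Assign the class whose codeword is nearest in Hamming distance."""
    dists = (code_matrix != dichotomy_outputs).sum(axis=1)
    return int(np.argmin(dists))

def ecoc_mutation(code_matrix, rng):
    """Flip one bit, retrying until rows stay distinct and no column
    becomes constant (an ECOC-aware constraint check; the paper's
    exact operator is not given in the abstract)."""
    m = code_matrix.copy()
    for _ in range(100):
        i, j = rng.integers(m.shape[0]), rng.integers(m.shape[1])
        m[i, j] ^= 1
        rows_ok = len({tuple(r) for r in m}) == m.shape[0]
        cols_ok = all(0 < m[:, c].sum() < m.shape[0] for c in range(m.shape[1]))
        if rows_ok and cols_ok:
            return m
        m[i, j] ^= 1   # undo and retry
    return m

rng = np.random.default_rng(0)
codes = np.array([[0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 0, 1]])  # 3 classes
mutated = ecoc_mutation(codes, rng)
print(ecoc_decode(codes, np.array([0, 1, 0, 0])))  # nearest codeword: class 1
```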
4rCNm3D | VERITAS: COMBINING EXPERT OPINIONS WITHOUT LABELED DATA | We consider a variation of the problem of combining expert opinions for the situation in which there is no ground truth to use for training. Even though we do not have labeled data, the goal of this work is quite different from an unsupervised learning problem in which the goal is to cluster the data. Our work is motivated by the application of segmenting a lung nodule in a computed tomography (CT) scan of the human chest. The lack of a gold standard of truth is a critical problem in medical imaging. A variety of experts, both human and computer algorithms, are available that can mark which voxels are part of a nodule. The question is, how to combine these expert opinions to estimate the unknown ground truth. We present the Veritas algorithm that predicts the underlying label using the knowledge in the expert opinions even without the benefit of any labeled data for training. We evaluate Veritas using artificial data and real CT images to which synthetic nodules have been added, providing a known ground truth. | [
"combining experts",
"medical images",
"machine learning",
"boosting",
"interobserver variability"
] | [
"P",
"P",
"M",
"U",
"U"
] |
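The Veritas algorithm itself is not specified in the abstract; for orientation, the sketch below is a generic unsupervised label-fusion baseline in the same spirit — estimate per-expert accuracies and the hidden voxel labels jointly, with no ground truth, starting from a majority vote (an EM-style weighted-majority scheme, not the authors' method).

```python
import numpy as np

def fuse_expert_labels(votes, n_iters=20):
    """votes: (n_experts, n_voxels) binary matrix of expert opinions.

    Alternate between (1) estimating each expert's accuracy against the
    current label guess and (2) re-labelling voxels by accuracy-weighted
    (log-odds) majority vote. No labelled data is used."""
    labels = (votes.mean(axis=0) > 0.5).astype(float)   # majority init
    for _ in range(n_iters):
        acc = (votes == labels).mean(axis=1)            # expert accuracy
        acc = np.clip(acc, 1e-3, 1 - 1e-3)
        w = np.log(acc / (1 - acc))                     # log-odds weights
        score = w @ (2 * votes - 1)                     # weighted vote
        labels = (score > 0).astype(float)
    return labels, acc

# Toy usage: 3 experts marking 5 voxels of a synthetic nodule.
votes = np.array([[1, 1, 0, 0, 1],
                  [1, 0, 0, 0, 1],
                  [0, 1, 1, 0, 1]])
labels, accuracy = fuse_expert_labels(votes)
```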