id (string, length 7) | title (string, length 3–578) | abstract (string, length 0–16.7k) | keyphrases (sequence) | prmu (sequence)
---|---|---|---|---|
3PFiPFr | WiFi Auto Configuration Architecture Using ZigBee RF4CE for Pervasive Environments | This paper presents a novel architecture to provide simple and easy control and sharing between various CE and IT devices for realizing a smart digital home using RF4CE, which is based on IEEE 802.15.4. The proposed architecture builds on RF4CE, a reliable bidirectional communication standard that also optimizes the ZigBee protocol to provide a simple and energy-efficient networking stack and standard application profiles for creating a multi-vendor interoperable solution for home entertainment and monitoring. In the proposed architecture, the RF4CE protocol provides the key mechanism for establishing an initial network between devices. The architecture then probes the connection type of each device and shares configuration information to achieve zero-configuration Wi-Fi connectivity setup. To show the feasibility of the proposed architecture, we developed various H/W prototypes, and the implementation results demonstrate that a mobile terminal can control other devices and provide content sharing between them. Furthermore, the RF4CE-based zero-configuration architecture can be extended to support smart connectivity in a smart digital home with various wireless technologies such as Wi-Fi, Bluetooth, UWB, and so on. | [
"zigbee",
"rf4ce",
"ieee 802.15.4",
"reliability",
"energy consumption",
"wireless sensor network",
"smart home control"
] | [
"P",
"P",
"P",
"P",
"M",
"M",
"R"
] |
3UQSry& | A practical approach to credit scoring | This paper proposes a DEA-based approach to credit scoring. Compared with conventional models such as multiple discriminant analysis, logistic regression analysis, and neural networks for business failure prediction, which require extra a priori information, this new approach solely requires ex-post information to calculate credit scores. For the empirical evidence, this methodology was applied to current financial data of 1061 externally audited manufacturing firms comprising the credit portfolio of one of the largest credit guarantee organizations in Korea. Using financial ratios, the methodology could synthesize a firm's overall performance into a single financial credibility score. The empirical results were also validated by supporting analyses (regression analysis and discriminant analysis) and by testing the model's discriminatory power using actual bankruptcy cases of 103 firms. In addition, we propose a practical credit rating method using the predicted DEA scores. | [
"credit scoring",
"credit rating",
"data envelopment analysis"
] | [
"P",
"P",
"M"
] |
1vm4vCC | The effects of etching and deposition on the performance and stress evolution of open through silicon vias | The effects of silicon etching using the Bosch process and LPCVD oxide deposition on the performance of open TSVs are analyzed through simulation. Using an in-house process simulator, a structure is generated which contains scalloped sidewalls as a result of the Bosch etch process. During the LPCVD deposition step, oxide is expected to be thinner at the trench bottom when compared to the top; however, additional localized thinning is observed around each scallop. The scalloped structure is compared to a structure where the etching step is not performed, but rather a flat trench profile is assumed. Both structures are imported into a finite element tool in order to analyze the effects of processing on device performance. The scalloped structure is shown to have an increased resistance and capacitance when compared to the flat TSV. Additionally, the scalloped TSV does not perform as well at high frequencies, where the signal loss is shown to increase. However, the scallops allow the TSV to respond better to an applied stress. This is due to the scallops' enhanced range of motion and displacement, meaning they can compensate for the stress along the entire sidewall and not only on the TSV top, as in the flat structure. | [
"through-silicon via sidewall scallops",
"3d integration",
"bosch etching simulations",
"low pressure chemical vapor deposition",
"tsv electrical performance",
"tsv stress response"
] | [
"M",
"U",
"R",
"M",
"M",
"M"
] |
:Ep-6Dd | How does social software change knowledge management? Toward a strategic research agenda | Knowledge management is commonly understood as IS implementations that enable processes of knowledge creation, sharing, and capture. Knowledge management at the firm level is changing rapidly. Previous approaches included centrally managed, proprietary knowledge repositories, often involving structured and controlled search and access. Today the trend is toward knowledge management by social software, which provides open and inexpensive alternatives to traditional implementations. While social software carries great promise for knowledge management, this also raises fundamental questions about the very essence and value of firm knowledge, the possibility for knowledge protection, firm boundaries, and the sources of competitive advantage. I draft a strategic research agenda consisting of five fundamental issues that should reinvigorate research in knowledge management. | [
"social software",
"knowledge management",
"knowledge-based view",
"information systems",
"open innovation"
] | [
"P",
"P",
"U",
"U",
"M"
] |
52GzLcL | A note of caution regarding anthropomorphism in HCI agents | Universal usability is an important component of HCI, particularly as companies promote their products in increasingly global markets to users with diverse cultural backgrounds. Successful anthropomorphic agents must have appropriate computer etiquette and nonverbal communication patterns. Because there are differences in etiquette, tone, formality, and colloquialisms across different user populations, it is unlikely that a generic anthropomorphic agent would be universally appealing. Additionally, because anthropomorphic characters are depicted as capable of human reasoning and possessing human motivations, users may ascribe undue trust to these agents. Trust is a complex construct that plays an important role in a user's interactions with an interface or system. Feelings and perceptions about an anthropomorphic agent may impact the construction of a mental model about a system, which may lead to inappropriate calibrations of automation trust that is based on an emotional connection with the anthropomorphic agent rather than on actual system performance. | [
"anthropomorphism",
"hci",
"agent",
"universal usability",
"trust",
"affect as information"
] | [
"P",
"P",
"P",
"P",
"P",
"M"
] |
-PCaNLB | An empirical investigation of six levels of enterprise resource planning integration | Proposes a six-level ERP integration model based on empirical examination. Empirical data were collected through a large-scale survey of ERP professionals. The survey data are analysed using partial least squares (PLS). Findings show that the six-level ERP integration model is validated. Academic and practical implications are discussed. | [
"erp",
"levels of integration",
"enterprise systems",
"islands-of-technology",
"global integration"
] | [
"P",
"R",
"M",
"U",
"M"
] |
uUYF5LA | Fully automatic model-based calcium segmentation and scoring in coronary CT angiography | The paper presents new methods for automatic coronary calcium detection, segmentation and scoring in coronary CT angiography (cCTA) studies. | [
"coronary ct angiography",
"calcium scoring",
"computed tomography"
] | [
"P",
"R",
"U"
] |
27Kybka | Orientation Field Estimation for Latent Fingerprint Enhancement | Identifying latent fingerprints is of vital importance for law enforcement agencies to apprehend criminals and terrorists. Compared to live-scan and inked fingerprints, the image quality of latent fingerprints is much lower, with complex image background, unclear ridge structure, and even overlapping patterns. A robust orientation field estimation algorithm is indispensable for enhancing and recognizing poor quality latents. However, conventional orientation field estimation algorithms, which can satisfactorily process most live-scan and inked fingerprints, do not provide acceptable results for most latents. We believe that a major limitation of conventional algorithms is that they do not utilize prior knowledge of the ridge structure in fingerprints. Inspired by spelling correction techniques in natural language processing, we propose a novel fingerprint orientation field estimation algorithm based on prior knowledge of fingerprint structure. We represent prior knowledge of fingerprints using a dictionary of reference orientation patches, which is constructed using a set of true orientation fields, and the compatibility constraint between neighboring orientation patches. Orientation field estimation for latents is posed as an energy minimization problem, which is solved by loopy belief propagation. Experimental results on the challenging NIST SD27 latent fingerprint database and an overlapped latent fingerprint database demonstrate the advantages of the proposed orientation field estimation algorithm over conventional algorithms. | [
"orientation field",
"latent fingerprint",
"fingerprint enhancement",
"spelling correction",
"dictionary",
"fingerprint matching"
] | [
"P",
"P",
"P",
"P",
"P",
"M"
] |
58pD9Wr | patient-centric authorization framework for sharing electronic health records | In modern healthcare environments, a fundamental requirement for achieving continuity of care is the seamless access to distributed patient health records in an integrated and unified manner, directly at the point of care. However, Electronic Health Records (EHRs) contain a significant amount of sensitive information, and allowing data to be accessible at many different sources increases concerns related to patient privacy and data theft. Access control solutions must guarantee that only authorized users have access to such critical records for legitimate purposes, and access control policies from distributed EHR sources must be accurately reflected and enforced accordingly in the integrated EHRs. In this paper, we propose a unified access control scheme that supports patient-centric selective sharing of virtual composite EHRs using different levels of granularity, accommodating data aggregation and various privacy protection requirements. We also articulate and handle the policy anomalies that might occur in the composition of discrete access control policies from multiple data sources. | [
"patient-centric authorization",
"electronic health records (ehrs)",
"selective sharing"
] | [
"P",
"P",
"P"
] |
3:o5VhG | entity ranking in wikipedia | The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness. | [
"entity ranking",
"xml retrieval",
"test collection"
] | [
"P",
"M",
"M"
] |
A&RN3MS | Divisible and pure intuitionistic fuzzy subgroups and their properties | In this paper, we study divisible (pure) intuitionistic fuzzy subgroups. Firstly, we define a special intuitionistic fuzzy subset on a group. Using this intuitionistic fuzzy subset, we obtain some new properties of it. Secondly, we define divisible (pure) intuitionistic fuzzy subgroups on a commutative group and investigate some properties. Lastly, we give some applications with level subsets of a divisible (pure) intuitionistic fuzzy subgroup. | [
"and pure intuitionistic fuzzy subgroups",
"intuitionistic fuzzy subgroups",
"divisible intuitionistic fuzzy subgroups"
] | [
"P",
"P",
"R"
] |
2CYUQT3 | a two-dimensional separation of concerns for compiler construction | During language evolution, compiler construction is usually performed along two dimensions: defining new abstract syntax tree (AST) classes, or adding new operations. In order to facilitate such changes, two software design patterns (i.e., the inheritance pattern and the visitor pattern) are widely used to help modularize the language constructs. However, as each design pattern is only suitable for one dimension of extension, neither of these two patterns can independently fulfill the evolution needs during the compiler construction process. In this paper, we analyze two dimensions of concerns in compiler construction and develop a paradigm allowing compiler evolution across these two dimensions using both object-orientation and aspect-orientation. Moreover, this approach provides an ability to perform pattern transformation based on pluggable aspects. A simple implementation of an expression language and its possible extension is demonstrated using Java and AspectJ. | [
"separation of concerns",
"compiling",
"pattern transformation",
"aspect-oriented programming"
] | [
"P",
"P",
"P",
"U"
] |
4g&1nL3 | Expulsion and scheduling control for multiclass queues with heterogeneous servers | We consider an M/M/2 system with nonidentical servers and multiple classes of customers. Each customer class has its own reward rate and holding cost. We may assign priorities so that high priority customers may preempt lower priority customers on the servers. We give two models for which the optimal admission and scheduling policy for maximizing expected discounted profit is determined by a threshold structure on the number of customers of each type in the system. Surprisingly, the optimal thresholds do not depend on the specific numerical values of the reward rates and holding costs, making them relatively easy to determine in practice. Our results also hold when there is a finite buffer and when customers have independent random deadlines for service completion. | [
"multiclass queues",
"heterogeneous servers",
"optimal policy"
] | [
"P",
"P",
"R"
] |
16VEz1q | The relations of entanglement dynamics and energy distribution | We study the relations between entanglement dynamics and energy distribution for multipartite systems. We show that if two blocks S and S of a system have energy exchanges, the entanglement and energy distribution between them are closely related, and have definite relations. | [
"entanglement dynamics",
"energy distribution"
] | [
"P",
"P"
] |
-MPbdin | Sparse matrix operations on several multi-core architectures | This paper compares various contemporary multicore-based microprocessor architectures from different vendors with different memory interconnects regarding performance, speedup, and parallel efficiency. Sparse matrix decomposition is used as a benchmark application. The example matrix used in the experiments comes from an electrical engineering application, where numerical simulation of physical processes plays an important role in the design of industrial products. Within this context, thread-to-core pinning and cache optimization are two important aspects which are investigated in more detail. | [
"pinning",
"cache optimization",
"multicore",
"performance optimization",
"sparse matrices"
] | [
"P",
"P",
"U",
"R",
"M"
] |
1TZpGw: | Consulting support during conceptual database design in the presence of redundancy in requirements specifications: an empirical study | This study examines the efficacy of a consulting system for designing conceptual databases in reducing data modelling errors. Seventy-two subjects participated in an experiment requiring modelling of two tasks using the consulting system. About half the subjects used the treatment version and the other half used the control version. The control version resembled the treatment version in the look and feel of the interface; however, it did not embed the rules and heuristics that were included in the treatment version. Research findings suggest that subjects using the treatment version significantly outscored their control version counterparts. There was an interaction effect between system and prior knowledge: subjects who scored low in a pre-test benefited the most from the treatment version. This study has demonstrated that a consulting system can significantly reduce the incidence of errors committed by designers engaged in conceptual database modelling. Further, the system is robust and can prevent errors even in the presence of redundancy in user requirements. (C) 2001 Academic Press. | [
"database design",
"redundancy",
"requirements specifications",
"consulting system",
"expert system",
"problem-solving"
] | [
"P",
"P",
"P",
"P",
"M",
"U"
] |
4dMefvg | Recursive estimation of nonparametric regression with functional covariate | The main purpose is to estimate the regression function of a real random variable with functional explanatory variable by using a recursive nonparametric kernel approach. The mean square error and the almost sure convergence of a family of recursive kernel estimates of the regression function are derived. These results are established with rates and precise evaluation of the constant terms. Also, a central limit theorem for this class of estimators is established. The method is evaluated on simulations and real dataset studies. | [
"regression function",
"almost sure convergence",
"recursive kernel estimators",
"functional data",
"quadratic error",
"asymptotic normality"
] | [
"P",
"P",
"P",
"M",
"M",
"U"
] |
-qS5UnS | Strategies for lifecycle concurrency and iteration - A system dynamics approach | Increasingly fierce commercial pressures necessitate the use of advanced software lifecycle techniques to meet growing demands on both product time-to-market and business performance. Two significant methods of achieving such improved cycle-time capability are concurrent software engineering and staged-delivery. Concurrent software engineering exploits the potential for simultaneous performance of development activities between projects, product deliveries, development phases, and individual tasks. Staged-delivery enables lifecycle iteration to supply defined chunks of product functionality at pre-planned intervals. Used effectively, these techniques provide a powerful route to reduced cycle-times, increased product quality and, potentially, lower development costs. However, the degree and manner in which these techniques should be applied remains an area for active research. This paper identifies some of the issues and open problems of incremental lifecycle management by reference to the development of aeroengine control systems within Rolls-Royce plc. We explain why system dynamics is a promising technique for evaluating strategies for lifecycle concurrency and iteration. (C) 1999 Elsevier Science Inc. All rights reserved. | [
"lifecycle concurrency",
"iteration",
"system dynamics",
"cycle-time acceleration",
"incremental development"
] | [
"P",
"P",
"P",
"M",
"R"
] |
4EKNgr6 | VLRU: Buffer management in client-server systems | In a client-server system, when LRU or its variant buffer replacement strategy is used on both the client and the server, the cache performance on the server side is very poor, mainly because of pages duplicated in both systems. This paper introduces a server buffer replacement strategy which uses a replaced page-id, rather than a request page-id, as the primary information for its operations. The importance of the corresponding pages in the server cache is decided according to the replaced page-ids that are delivered from clients to the server, so that locations of the pages are altered. Consequently, if a client uses LRU as its buffer replacement strategy, then the server cache is seen by the client as a long virtual client LRU cache extended to the server. Since the replaced page-id is only sent to the server by piggybacking whenever a new page fetch request is sent, the operation to deliver the replaced page-id is simple and induces a minimal overhead. We show that the proposed strategy reveals good performance characteristics in diverse situations, such as single and multiple clients, as well as with various access patterns. | [
"lru",
"buffer replacement algorithm",
"client-server database"
] | [
"P",
"M",
"M"
] |
MP11Mo6 | Equivalences for a biological process algebra | This paper investigates Bio-PEPA, the stochastic process algebra for biological modelling developed by Ciocchetta and Hillston. It focuses on Bio-PEPA with levels, where molecular counts are grouped or concentrations are discretised into a finite number of levels. Basic properties of well-defined Bio-PEPA systems are established, after which equivalences used for the stochastic process algebra PEPA are considered for Bio-PEPA, and are shown to be identical for well-defined Bio-PEPA systems. Two new semantic equivalences parameterised by functions, called g-bisimilarity and weak g-bisimilarity, are introduced. Different functions lead to different equivalences for Bio-PEPA. Congruence is shown for both forms of g-bisimilarity under certain reasonable conditions on the function, and the use of these equivalences is demonstrated with a biologically-motivated example where two similar species are treated as a single species, and modelling of alternative pathways in the MAPK kinase signalling cascade. (C) 2011 Elsevier B.V. All rights reserved. | [
"process algebra",
"biological modelling",
"discretisation",
"semantic equivalence",
"congruence",
"parameterised bisimulation"
] | [
"P",
"P",
"P",
"P",
"P",
"M"
] |
-L4eSwv | Reducing bandwidth for robust distributed speech recognition in conditions of packet loss | This paper proposes a method to reduce the bandwidth requirements for a distributed speech recognition (DSR) system, with minimal impact on recognition performance. Bandwidth reduction is achieved by applying a wavelet decomposition to feature vectors extracted from speech using an auditory-based front-end. The resulting vectors undergo vector quantisation and are then combined in pairs for transmission over a statistically modeled channel that is subject to packet burst loss. Recognition performance is evaluated in the presence of both background noise and packet loss. When there is no packet loss, results show that the proposed method can reduce the bandwidth required to 50% of the bandwidth required for the system in which the proposed method is not used, without compromising recognition performance. The bandwidth can be further reduced to 25% of the baseline for a slight decrease in recognition performance. Furthermore, in the presence of packet loss, the proposed method for bandwidth reduction, when combined with a suitable redundancy scheme, gives a 29% reduction in bandwidth, when compared to the recognition performance of an established packet loss mitigation technique. | [
"robust distributed speech recognition",
"packet loss",
"bandwidth reduction",
"wavelet",
"auditory front-end"
] | [
"P",
"P",
"P",
"P",
"U"
] |
-Q3dUKa | moving away from a hacker vs. disciplined-based organizational legacy - an organization theory perspective on software processes | Software firms, like producers in any industry, face a spectrum of product and process choices. Historically, however, this continuum has been anchored by two contrasting approaches to software development, and these paradigms continue to dominate literature and practice today: the hacker versus a more discipline-based approach. As application-type software products have become larger and more complex, former practitioners of the hacker approach have had to impose more discipline and structure on their development process. Likewise, discipline-based firms are having to learn to work more flexibly in the face of increasingly volatile competition and reduced cycle times. This paper addresses the managerial challenges presented by these converging models. Can firms overcome their legacy as either a hacker or disciplinarian without sacrificing the advantages associated with their old approach? | [
"organization theory",
"software processes",
"software firms",
"software development",
"software products",
"hacker approach",
"managerial challenges",
"software development management",
"product choice",
"disciplined-based approach"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"M",
"R",
"R"
] |
4ibPgJV | Wavelet-PLS regression models for both exploratory data analysis and process monitoring | Two novel approaches are presented which take into account the collinearity among variables and the different phenomena occurring at different scales. This is achieved by combining partial least squares (PLS) and multiresolution analysis (MRA). In this work the two novel approaches are interconnected. First, a standard exploratory PLS model is scrutinized with MRA. In this way, different events at different scales and latent variables are recognized. In this case, especially periodic seasonal fluctuations and long-term drifting introduce problems. These low-frequency variations mask and interfere with the detection of small and moderate-level transient phenomena. As a result, the confidence limits become too wide. This relatively common problem caused by autocorrelated measurements can be avoided by detrending. In practice, this is realized by using fixed-size moving windows and by detrending these windows. Based on the MRA of the standard model, the second PLS model for process monitoring is constructed based on the filtered measurements. This filtering is done by removing the low-frequency scales representing low-frequency components, such as seasonal fluctuations and other long-term variations, prior to standard PLS modeling. For these particular data the results are shown to be superior compared to a conventional PLS model based on the non-filtered measurements. Often, model updating is necessary owing to non-stationary characteristics of the process and variables. As a big advantage, this new approach seems to remove any further need for model updating, at least in this particular case. This is because the presented approach removes low-frequency fluctuations and results in a more stationary filtered data set that is more suitable for monitoring. Copyright (C) 2000 John Wiley & Sons, Ltd. | [
"exploratory data analysis",
"process monitoring",
"partial least squares",
"multiresolution analysis",
"paper and pulp industry",
"activated sludge wastewater treatment plant",
"chemometrics",
"wavelets"
] | [
"P",
"P",
"P",
"P",
"M",
"U",
"U",
"U"
] |
23-gDr: | Multilevel algorithms for 3D simulation of nonlinear elasticity problems | This study is devoted to the numerical solution of 3D elasticity problems in multilayer media. The problem is described by a coupled system of second-order nonlinear elliptic partial differential equations with strongly varying coefficients. The boundary value problem is discretized by trilinear finite elements. The goal of the paper is to analyze the performance of three hierarchical algorithms for the arising discrete problems. The secant method is applied as a general outer nonlinear iterative procedure. The two-level block-size reduction block-incomplete LU (BSR BILU) algorithm and the algebraic multilevel iteration (AMLI) algorithm are implemented as preconditioners for the linearized problems within the framework of the conjugate gradient method. The BSR BILU is a special two-level algorithm, while the AMLI preconditioner is based on a regular multilevel mesh refinement. A simple patched local refinement in combination with the BSR BILU algorithm is considered at the end, as an efficient approach for problems with localized zones of active interactions. The developed FEM codes are applied for 3D simulation of pile systems in weak multilayer soil media. The benchmark problem is taken from real-life bridge engineering practice. The presented test data illustrate the abilities of the proposed methods and algorithms as well as the robustness of the codes. (C) 1999 IMACS/Elsevier Science B.V. All rights reserved. | [
"3d simulation",
"nonlinear elasticity problems",
"fem codes"
] | [
"P",
"P",
"P"
] |
456LS&o | the requirements engineer as a liaison officer in agile software development | In agile development projects, customers are facing considerable challenges because of the substantial amount and depth of their responsibility for the overall progress and success of the project. This paper proposes to extend and redefine the role of a requirements engineer working together with the customer as a supportive on-site liaison in accordance with the agile paradigm. Based on a discussion of requirements-related customer issues, the profile of such an agile requirements engineer as liaison officer will be outlined and the prospective benefits will be stated. | [
"agile requirements engineer",
"agile customer",
"customer liaison"
] | [
"P",
"R",
"R"
] |
-cBb3Hp | A disjunctive cutting plane procedure for general mixed-integer linear programs | In this paper we develop a cutting plane algorithm for solving mixed-integer linear programs with general-integer variables. A novel feature of the algorithm is that it generates inequalities at all gamma-optimal vertices of the LP-relaxation at each iteration. The cutting planes generated in the procedure are found by considering a natural generalization of the 0-1 disjunction used by Balas, Ceria, and Cornuejols in the context of solving binary mixed-integer linear programs [3,4]. | [
"mixed integer programming"
] | [
"M"
] |
-BXGWaU | Pattern synthesis for opportunistic array radar using least square fitness estimation-genetic algorithm method | Pattern synthesis in three-dimensional (3D) opportunistic array radar becomes complex when a multitude of antennas are considered to be randomly distributed in a 3D space. To obtain an optimal pattern, several freedoms must be constrained. A new pattern synthesis approach based on the improved genetic algorithm (GA) using the least square fitness estimation (LSFE) method is proposed. Parameters optimized by this method include antenna locations, stimulus states, and phase weights. The new algorithm demonstrates that the fitness variation tendency of GA can be effectively predicted after several eras by the LSFE method. It is shown that by comparing the variation of LSFE curve slope, the GA operator can be adaptively modified to avoid premature convergence of the algorithm. The validity of the algorithm is verified using computer implementation. 2011 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2011. | [
"pattern synthesis",
"opportunistic array radar",
"genetic algorithm",
"the least square fitness estimation",
"antenna radiation patterns"
] | [
"P",
"P",
"P",
"P",
"M"
] |
4qadwQA | Distributed snapshot isolation: global transactions pay globally, local transactions pay locally | Modern database systems employ Snapshot Isolation to implement concurrency control and isolation because it promises superior query performance compared to lock-based alternatives. Furthermore, Snapshot Isolation never blocks readers, which is an important property for modern information systems, which have mixed workloads of heavy OLAP queries and short update transactions. This paper revisits the problem of implementing Snapshot Isolation in a distributed database system and makes three important contributions. First, a complete definition of Distributed Snapshot Isolation is given, thereby extending existing definitions from the literature. Based on this definition, a set of criteria is proposed to efficiently implement Snapshot Isolation in a distributed system. Second, the design space of alternative methods to implement Distributed Snapshot Isolation is presented based on this set of criteria. Third, a new approach to implement Distributed Snapshot Isolation is devised; we refer to this approach as Incremental. The results of comprehensive performance experiments with the TPC-C benchmark show that the Incremental approach significantly outperforms any other known method from the literature. Furthermore, the Incremental approach requires no a priori knowledge of which nodes of a distributed system are involved in executing a transaction. Also, the Incremental approach can execute transactions that involve data from a single node only with the same efficiency as a centralized database system. This way, the Incremental approach takes advantage of sharding or other ways to improve data locality. The cost for synchronizing transactions in a distributed system is only paid by transactions that actually involve data from several nodes. All these properties make the Incremental approach more practical than related methods proposed in the literature. | [
"snapshot isolation",
"concurrency control",
"distributed databases"
] | [
"P",
"P",
"P"
] |
1tm-cy9 | Inducer: a public domain workbench for data mining | This paper describes the facilities available in Inducer, a public domain classification workbench aimed at users who wish to analyse their own datasets using a range of data mining strategies or to conduct experiments with a given technique or combination of techniques across a range of datasets. Inducer has a graphical user interface which is designed to be easy-to-use by beginners, but also includes a range of advanced features for experienced users, including facilities to export the information generated in a form suitable for further processing by other packages. Experiments using the workbench are described. | [
"data mining",
"classification",
"decision rules",
"decision trees",
"rule induction"
] | [
"P",
"P",
"U",
"U",
"U"
] |
2pk5C9r | The effect of extreme low frequency external electric field on the adaptability in the Ermentrout model | Spike-frequency adaptation is a prominent aspect of neuronal dynamics in neural information processing. The external electric field has an effect on the dynamic behavior of the neural system, affecting the generation and conduction of neural information. Based on the Ermentrout model, a modified Ermentrout model under the external electric field is established. From the membrane potential curve and spike frequency curve both changing with time and the onset spike frequency curve changing with the input signal, the changes of the adaptability and the requirements met by the external electric field for the adaptive model are given. The research results suggest a functional role for the external electric field in the excitability of the biological nervous system and the pathogenesis of several neurological diseases. | [
"external electric field",
"ermentrout model",
"spike-frequency adaptation"
] | [
"P",
"P",
"P"
] |
FWbT&gr | secure deletion from inverted indexes on compliance storage | Recent litigation and intense regulatory focus on secure retention of electronic records have spurred a rush to introduce Write-Once-Read-Many (WORM) storage devices for retaining business records such as electronic mail. A file committed to a WORM device cannot be deleted even by a super-user and hence is secure from attacks originating from company insiders. Secure retention, however, is only a part of a document's lifecycle: it is often crucial to delete documents after their mandatory retention period is over. Since most of the modern WORM devices are built on top of magnetic media, they also support a secure deletion operation by associating expiration time with files. However, for the deleted document to be truly unrecoverable, it must also be deleted from any index structure built over it. This paper studies the problem of securely deleting entries from an inverted index. We first formalize the concept of secure deletion by defining two deletion semantics: strongly and weakly secure deletions. We then analyze some of the deletion schemes that have been proposed in literature and show that they only achieve weakly secure deletion. Furthermore, such schemes have poor space efficiency and/or are inflexible. We then propose a novel technique for hiding index entries for deleted documents, based on the concept of ambiguating deleted entries. The proposed technique also achieves weakly secure deletion, but is more space efficient and flexible. | [
"secure deletion",
"inverted index",
"regulatory compliance"
] | [
"P",
"P",
"R"
] |
3:JV27t | Comments on Shao-Cao's Unidirectional Proxy Re-Encryption Scheme from PKC 2009 | Proxy re-encryption (PRE), introduced by Blaze, Bleumer and Strauss, allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into an encryption of the same message intended for Bob. In PKC'09, Shao and Cao proposed a unidirectional PRE scheme without pairings, and compared their scheme with Libert-Vergnaud's pairing-based unidirectional PRE scheme from PKC'08. In this paper, we indicate that Shao-Cao's scheme is not secure against chosen-plaintext attack in Libert-Vergnaud's security model. | [
"proxy re-encryption",
"chosen-plaintext attack",
"chosen-ciphertext attack",
"bilinear pairing",
"transformed ciphertext"
] | [
"P",
"P",
"M",
"M",
"M"
] |
3G8mMEY | Kinect cane: an assistive system for the visually impaired based on the concept of object recognition aid | This paper proposes a novel concept to assist visually impaired individuals in recognizing three-dimensional objects in everyday environments. This concept is realized as a portable system that consists of a white cane, a Microsoft Kinect sensor, a numeric keypad, a tactile feedback device, and other components. By the use of the Kinect sensor, the system searches for an object that a visually impaired user instructs the system to find and then returns a searching result to the user via the tactile feedback device. The major advantage of the system is the ability to recognize the objects of various classes, such as chairs and staircases, out of detectable range of white canes. Furthermore, the system is designed to return minimum required information related to the instruction of a user so that the user can obtain necessary information more efficiently. The system is evaluated through two types of experiment: object recognition test and user study. The experimental results indicate that the system is promising as a means of helping visually impaired users recognize objects. | [
"assistive system",
"object recognition aid",
"microsoft kinect",
"visually impaired user",
"user study",
"depth data"
] | [
"P",
"P",
"P",
"P",
"P",
"U"
] |
-QPb9P3 | Transportation fuels and policy for Singapore: an AHP planning approach | A comprehensive study of alternative fuels for land transportation in Singapore is carried out. A multiple attribute analysis is used to identify a number of fuel options for possible future use. An AHP analysis is performed to evaluate four possible plans or scenarios. The preferred plan, however, deviates from the most likely future scenario and an iterative forward and backward AHP planning process is used to identify and evaluate a set of policies which may be used to reduce the gap. (C) 2000 Elsevier Science Ltd. All rights reserved. | [
"transportation fuels",
"ahp",
"mcdm"
] | [
"P",
"P",
"U"
] |
2n6JugP | a standby-sparing technique with low energy-overhead for fault-tolerant hard real-time systems | Time redundancy (rollback-recovery) and hardware redundancy are commonly used in real-time systems to achieve fault tolerance. From an energy consumption point of view, time redundancy is generally preferable to hardware redundancy. However, hard real-time systems often use hardware redundancy to meet high reliability requirements of safety-critical applications. In this paper we propose a hardware-redundancy technique with low energy-overhead for hard real-time systems. The proposed technique is based on standby-sparing, where the system is composed of a primary unit and a spare. Through analytical models, we have developed an online energy-management method which uses a slack reclamation scheme to reduce the energy consumption of both the primary and spare units. In this method, dynamic voltage scaling (DVS) is used for the primary unit and dynamic power management (DPM) is used for the spare. We conducted several experiments to compare the proposed system with a fault-tolerant real-time system which uses time redundancy for fault tolerance and DVS with slack reclamation for low energy consumption. The results show that for relaxed time constraints, the proposed system provides up to 24% energy saving as compared to the time-redundancy system. For tight deadlines when the time-redundancy system can tolerate no faults, the proposed system preserves its fault-tolerance but with about 32% more energy consumption. | [
"hard real-time systems",
"reliability",
"energy minimization"
] | [
"P",
"P",
"M"
] |
s3gudpj | Longest increasing subsequences in windows based on canonical antichain partition | Given a sequence pi(1)pi(2)...pi(n), a longest increasing subsequence (LIS) in a window pi<l,r> = pi(l)pi(l+1)...pi(r) is a longest subsequence sigma = pi(i1)pi(i2)...pi(iT) such that l <= i1 < i2 < ... < iT <= r and pi(i1) < pi(i2) < ... < pi(iT). We consider the problem of finding a LIS in every window of the fixed-size window set S-FIX = {pi<i+1, i+w> : 0 <= i <= n-w}, together with the boundary windows {pi<1, i>, pi<n-i+1, n> : i < w}. By maintaining a canonical antichain partition in windows, we present an optimal output-sensitive algorithm to solve this problem in O(OUTPUT) time, where OUTPUT is the sum of the lengths of the n + w - 1 LISs in those windows of S-FIX. In addition, we propose a more generalized problem called the LISSET problem, which is to find a LIS for every window in a set S-VAR containing variable-size windows. By applying our algorithm, we provide an efficient solution for the LISSET problem to output a LIS (or all the LISs) in every window, which is better than the straightforward generalization of classical LIS algorithms. An upper bound of our algorithm on the LISSET problem is discussed. (c) 2007 Elsevier B. V. All rights reserved. | [
"longest increasing subsequences",
"canonical antichain partition",
"data streaming model"
] | [
"P",
"P",
"U"
] |
23CM1uR | Assessing modern ground survey methods and airborne laser scanning for digital terrain modelling: A case study from the Lake District, England | This paper compares the applicability of three ground survey methods for modelling terrain: one man electronic tachymetry (TPS), real time kinematic GPS (GPS), and terrestrial laser scanning (TLS). Vertical accuracy of digital terrain models (DTMs) derived from GPS, TLS and airborne laser scanning (ALS) data is assessed. Point elevations acquired by the four methods represent two sections of a mountainous area in Cumbria, England. They were chosen so that the presence of non-terrain features is constrained to the smallest amount. The vertical accuracy of the DTMs was addressed by subtracting each DTM from TPS point elevations. The error was assessed using exploratory measures including statistics, histograms, and normal probability plots. The results showed that the internal measurement accuracy of TPS, GPS, and TLS was below a centimetre. TPS and GPS can be considered equally applicable alternatives for sampling the terrain in areas accessible on foot. The highest DTM vertical accuracy was achieved with GPS data, both on sloped terrain (RMSE 0.16m) and flat terrain (RMSE 0.02m). TLS surveying was the most efficient overall but veracity of terrain representation was subject to dense vegetation cover. Therefore, the DTM accuracy was the lowest for the sloped area with dense bracken (RMSE 0.52m) although it was the second highest on the flat unobscured terrain (RMSE 0.07m). ALS data represented the sloped terrain more realistically (RMSE 0.23m) than the TLS. However, due to a systematic bias identified on the flat terrain the DTM accuracy was the lowest (RMSE 0.29m) which was above the level stated by the data provider. Error distribution models were more closely approximated by normal distribution defined using median and normalized median absolute deviation which supports the use of the robust measures in DEM error modelling and its propagation. | [
"laser scanning",
"tachymetry",
"gps",
"vertical accuracy",
"dem/dtm",
"great langdale"
] | [
"P",
"P",
"P",
"P",
"M",
"U"
] |
134yvin | Realistic Long Term Evolution Performance for Massive HeNB Residential Deployments | Nowadays, around 80% of the mobile data traffic is generated indoors, and, therefore, in-building solutions are gaining interest among mobile operators, to improve users' quality of experience and optimize the use of network resources. In this context, with IEEE 802.11 and 3G/HSPA femtocells competing as in-building solutions, long term evolution (LTE) has appeared to enable operators to meet growing data-rate demands, and it is expected to have a key role in future indoor deployments. In this paper, a complete analysis of the performance of in-building self-deployment LTE solutions is carried out, by means of system-level network simulations in multiple typical indoor scenarios. The variability of the performance due to aspects such as the arbitrary HeNB location, the penetration rate of the service, the neighboring effects of HeNB nodes, the frequency used and the interaction among LTE macrocells and femtocells is thoroughly studied and discussed. Besides that, mechanisms proposed in 3GPP Release 11 to mitigate performance degradation in high density HeNB deployments are presented and analyzed. With regard to these mechanisms, different configuration access modes, control schemes to automatically select transmitted power and Intercell Interference Coordination Techniques (ICIC) have been considered, and their effect on the performance of HeNB in-building deployments has been assessed. The results obtained provide network designers and mobile operators with valuable information about the expected number of indoor users which can be served using HeNB networks and its variability under different network conditions. In addition to this, results presented are useful to define policies to select when mechanisms to mitigate performance degradation are required to be activated, depending on the type of deployment scenario, penetration rates, HeNB loads or operator prioritization requirements, and both select the ranges of the configurable parameters of these mechanisms, and HeNB default settings. | [
"henb",
"indoor radio communications",
"long-term evolution",
"optimization of 4g wireless networks"
] | [
"P",
"M",
"M",
"M"
] |
4G3:U9U | On the containment and equivalence problems for two-way transducers | We look at some classes of two-way transducers with auxiliary memory and investigate their containment and equivalence problems. We believe that our results are the strongest known to date concerning two-way transducers. (C) 2011 Elsevier B.V. All rights reserved. | [
"equivalence problem",
"two-way transducer",
"containment problem"
] | [
"P",
"P",
"R"
] |
henpk6C | Two-stage intonation modeling using feedforward neural networks for syllable based text-to-speech synthesis | This paper proposes a two-stage feedforward neural network (FFNN) based approach for modeling fundamental frequency (F-0) values of a sequence of syllables. In this study, (i) linguistic constraints represented by positional, contextual and phonological features, (ii) production constraints represented by articulatory features and (iii) linguistic relevance tilt parameters are proposed for predicting intonation patterns. In the first stage, tilt parameters are predicted using linguistic and production constraints. In the second stage, F-0 values of the syllables are predicted using the tilt parameters predicted from the first stage, and basic linguistic and production constraints. The prediction performance of the neural network models is evaluated using objective measures such as average prediction error (mu), standard deviation (sigma) and linear correlation coefficient (gamma(X,Y)). The prediction accuracy of the proposed two-stage FFNN model is compared with other statistical models such as Classification and Regression Tree (CART) and Linear Regression (LR) models. The prediction accuracy of the intonation models is also analyzed by conducting listening tests to evaluate the quality of synthesized speech obtained after incorporation of intonation models into the baseline system. From the evaluation, it is observed that prediction accuracy is better for two-stage FFNN models, compared to the other models. (C) 2013 Elsevier Ltd. All rights reserved. | [
"intonation models",
"feedforward neural networks",
"text-to-speech synthesis",
"linguistic constraints",
"positional",
"contextual",
"phonological",
"production constraints",
"articulatory",
"tilt",
"prediction accuracy",
"f-0 of syllable"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"R"
] |
-MtK:Yg | A new construction of optimal p(2)-ary low correlation zone sequences using unified sequences | In this paper, given integers e and n such that e divides n, and a prime p, we propose a method of constructing an optimal p(2)-ary low correlation zone (LCZ) sequence set with parameters (p(n) - 1, p(e) - 1, (p(n) - 1)/(p(e) - 1), 1) from a p-ary sequence of the same length with ideal autocorrelation. The resulting p(2)-ary LCZ sequence set can be viewed as the generalization of the optimal quaternary LCZ sequence set by Kim, Jang, No, and Chung in respect of the alphabet size. This generalization becomes possible due to a completely new proof covering any prime p. Under this proof, the quaternary case can be considered as a specific example for p = 2. | [
"unified sequences",
"low correlation zone (lcz) sequences",
"p(2)-ary sequences",
"quasi-synchronous code division multiple access (qs-cdma)"
] | [
"P",
"P",
"R",
"M"
] |
2psL6E: | A simple and accurate solution for calculating stresses in conical shells | In this paper, new variable transformation formulas are introduced to solve the basic governing differential equations for conical shells. By performing magnitude order analysis and neglecting the quantities with h/R magnitude order, the basic governing differential equations for conical shells are transformed into a second-order differential equation with complex constant coefficients. By solving this second-order differential equation, a simple and accurate solution for conical shells is derived. The present solution is simpler than the exact solution because it does not use Bessel's functions, and also more accurate than the equivalent cylinder solution. Numerical examples are given to illustrate this conclusion. The simple and accurate solution provides a quick means for analyzing stresses in conical shells. | [
"simple and accurate solution",
"conical shell",
"exact solution",
"stress analysis",
"approximate solution"
] | [
"P",
"P",
"P",
"R",
"M"
] |
-P&TU3d | Optimization design of corrugated beam guardrail based on RBF-MQ surrogate model and collision safety consideration | Multiobjective optimization design was made for two corrugated beam guardrails. RBF surrogate model based on collision safety consideration was used for regression analysis. Comparative studies were made between W-beam guardrail and Thrie-beam guardrail. | [
"corrugated beam guardrail",
"collision",
"multiobjective optimization",
"crashworthiness",
"radial basis function",
"highway"
] | [
"P",
"P",
"P",
"U",
"U",
"U"
] |
57AVLGN | Doctor Faustus in the twenty-first century | In the medieval legend, Doctor Faustus strikes a dark deal with the devil; he obtains vast powers for a limited time in exchange for a priceless possession, his eternal soul. The cautionary tale, perhaps more than ever, provides a provocative lens for examining humankind's condition, notably its indefatigable faith in knowledge and technology and its predilection toward misusing both. A variety of important questions are raised in this meditation, including: What is the nature of knowledge today and how does it differ from knowledge in prior times? What is its relation to technology and power? What paths are we heading along and which alternative ones are being avoided? Not insignificantly, we also raise the issue of civic ignorance, including that which is intentionally cultivated and that which is simply a lack of knowledge. We also consider the identity of Doctor Faustus in the twenty-first century and, in a more material world like ours, what is the soul that he would lose in the bargain, and what damage might be done to Faustus and to innocent bystanders. Finally, since people don't always live up to the terms of agreements they make, what, if anything, could Faustus do to wriggle out of the bargain, to avoid the loss of his all-important soul. Our response is not to disavow knowledge (as the implicit lesson of the original myth might suggest) but to shift to another approach to knowledge that is more collective and more responsive to actual needs of our era. This approach, which we call civic intelligence, is considered as a way to avoid the possible catastrophes that the Faustian bargain we've seemingly struck is likely to bring. | [
"civic ignorance",
"civic intelligence",
"sociology of knowledge",
"technological critique",
"social construction of knowledge"
] | [
"P",
"P",
"M",
"M",
"M"
] |
11pfd4Q | Cloud Computing for Smart Grid applications | A reliable and efficient communications system is required for the robust, affordable and secure supply of power through Smart Grids (SG). Computational requirements for Smart Grid applications can be met by utilizing the Cloud Computing (CC) model. Flexible resources and services shared in network, parallel processing and omnipresent access are some features of Cloud Computing that are desirable for Smart Grid applications. Even though the Cloud Computing model is considered efficient for Smart Grids, it has some constraints such as security and reliability. In this paper, the Smart Grid architecture and its applications are focused on first. The Cloud Computing architecture is explained thoroughly. Then, Cloud Computing for Smart Grid applications is also introduced in terms of efficiency, security and usability. Cloud platforms' technical and security issues are analyzed. Finally, existing cloud-service-based Smart Grid projects and open research issues are presented. | [
"cloud computing",
"smart grid",
"smart grid and cloud computing architecture",
"cloud computing based smart grid applications and projects"
] | [
"P",
"P",
"R",
"R"
] |
4graMvf | structured data interfacing for software systems | Structured programming has drawn considerable attention. Discussions have focused on language and data manipulations, however, and little has been done in the area of structuring data interfacing schemes for transferring data from one routine to another within a software system. Without proper rules governing the structuring of such schemes, it is difficult to trace the data interfacing flows. A structured data interfacing method, including selections of external routine usages, common file allocations, data naming techniques, and scheme construction criteria for solving the data traceability problem, is suggested. As a result, the sources, destinations and types of the data in a data transferring flow are explicitly traceable with the names of the data. The application of this method to a FORTRAN system is presented. Similar applications to systems in other languages will help to further the state of the art of software system development. | [
"structured data interfacing",
"software systems",
"external routine usages",
"common files",
"fortran systems",
"standard names",
"software reliability",
"common file specification"
] | [
"P",
"P",
"P",
"P",
"P",
"M",
"M",
"M"
] |
3xsCnbw | The election problem in asynchronous distributed systems with bounded faulty processes | Determining the "weakest" failure detectors is a central topic in solving many agreement problems such as Consensus, Non-Blocking Atomic Commit and Election in asynchronous distributed systems. So far, this has been studied extensively for several such fundamental problems. It has been stated that the Perfect Failure Detector P is the weakest failure detector to solve the Election problem with any number of faulty processes. In this paper, we introduce the Modal failure detector M and show that M is the weakest failure detector to solve Election when the number of faulty processes is less than [n/2]. We also show that it is strictly weaker than P. | [
"failure detectors",
"consensus",
"distributed algorithm",
"leader election"
] | [
"P",
"P",
"M",
"M"
] |
3pxfcF2 | COMPUTATIONALLY TESTABLE CONDITIONS FOR EXCITABILITY AND TRANSPARENCY OF A CLASS OF TIME-DELAY SYSTEMS WITH POINT DELAYS | This article is concerned with the excitability of positive linear time-invariant systems subject to internal point delays. It is proved that the excitability independent of delay is guaranteed if an auxiliary delay-free system is excitable. Necessary and sufficient conditions for excitability and transparency are formulated in terms of the parameterization of the dynamics and control matrices and, equivalently, in terms of strict positivity of a matrix of an associate system obtained from the influence graph of the original system. Such conditions are testable through simple algebraic tests involving moderate computational effort. | [
"transparency",
"time-delay systems",
"point delays",
"excitable systems",
"positive systems"
] | [
"P",
"P",
"P",
"R",
"R"
] |
56yW9MM | Nonlinear modeling of a SOFC stack based on ANFIS identification | An adaptive neural-fuzzy inference system (ANFIS) model is developed to study the effect of different flows on the performance of a solid oxide fuel cell (SOFC). During the modeling process, a hybrid learning algorithm combining backpropagation (BP) and least squares estimation (LSE) is adopted to identify the linear and nonlinear parameters in the ANFIS. The validity and accuracy of the model are tested by simulations, and the simulation results reveal that the obtained ANFIS model can efficiently approximate the dynamic behavior of the SOFC stack. Thus it is feasible to establish a model of the SOFC stack by ANFIS. | [
"modeling",
"adaptive neural-fuzzy inference system (anfis)",
"solid oxide fuel cells (sofcs)"
] | [
"P",
"P",
"P"
] |
fi1Stho | Smart grid and smart building inter-operation using agent-based particle swarm optimization | Future power systems require a change from a vertical to a horizontal structure, in which the customer plays a central role. As buildings represent a substantial aggregation of energy consumption, the intertwined operation of the future power grid and the built environment is crucial to achieving energy efficiency and sustainability goals. This transition towards a so-called smart grid (SG) requires advanced building energy management systems (BEMS) to cope with the highly complex interaction between the two environments. This paper proposes an agent-based approach to optimize the inter-operation of the SG-BEMS framework. Furthermore, a computational intelligence technique, i.e. Particle Swarm Optimization (PSO), is used to maximize both comfort and energy efficiency. Numerical results from an integrated simulation show that the operation of the building can be dynamically changed to support the voltage control of the local power grid, without jeopardizing the building's main function, i.e. comfort provision. | [
"particle swarm optimization",
"energy management",
"comfort management",
"demand side management",
"building automation",
"multi-agent systems"
] | [
"P",
"P",
"R",
"M",
"M",
"M"
] |
-vZ3g3R | On hybrid models of quantum finite automata | This paper describes several existing hybrid models of QFA in a uniform way. We clarify the relationship between hybrid QFA and some other models. Some results concerning the language recognition power and the equivalence problem of hybrid QFA follow directly in this paper. | [
"quantum finite automata",
"hybrid model of qfa",
"quantum computing",
"automata theory"
] | [
"P",
"P",
"M",
"M"
] |
4zzUoV2 | Time-varying signal analysis to detect high-altitude periodic breathing in climbers ascending to extreme altitude | This work investigates the performance of cardiorespiratory analysis detecting periodic breathing (PB) in chest wall recordings in mountaineers climbing to extreme altitude. The breathing patterns of 34 mountaineers were monitored unobtrusively by inductance plethysmography, ECG and pulse oximetry using a portable recorder during climbs at altitudes between 4497 and 7546 m on Mt. Muztagh Ata. The minute ventilation (VE) and heart rate (HR) signals were studied, to identify visually scored PB, applying time-varying spectral, coherence and entropy analysis. In 411 climbing periods, 30-120 min in duration, high values of the mean power (MP_VE) and slope (MSlope_VE) of the modulation frequency band of VE accurately identified PB, with an area under the ROC curve of 88% and 89%, respectively. Prolonged stay at altitude was associated with an increase in PB. During PB episodes, higher peak power of ventilatory (MP_VE) and cardiac (MP_LF of HR) oscillations and cardiorespiratory coherence (MP_LF of Coher), but reduced ventilation entropy (SampEn_VE), was observed. Therefore, the characterization of cardiorespiratory dynamics by the analysis of VE and HR signals accurately identifies PB and effects of altitude acclimatization, providing promising tools for investigating physiologic effects of environmental exposures and diseases. | [
"high-altitude periodic breathing",
"acclimatization",
"cardiorespiratory characterization",
"time-varying spectral analysis",
"hypoxia"
] | [
"P",
"P",
"R",
"R",
"U"
] |
-neLWtN | On-line predictions of the aspen fibre and birch bark content in unbleached hardwood pulp, using NIR spectroscopy and multivariate data analysis | An on-line fibre-based near-infrared (NIR) spectrometric analyser was adapted for on-site process analysis at an integrated paperboard mill. The analyser uses multivariate techniques for the quantitative prediction of the aspen fibre (aspen) and the birch bark contents of sheets of unbleached hardwood pulp. The NIR analyser is a prototype constructed from standard NIR components. The spectroscopic data was processed by using principal component analysis (PCA) and partial least squares (PLS) regression. Three sample sets were collected from three experimental designs, each composed of known pulp contents of birch, aspen and birch bark. Sets 1 and 2 were used for model calibration and set 3 was used to validate the models. The PLS model that produced the best predictions gave an error of prediction (RMSEP) of 13% for aspen and less than 2% for birch bark. Eight components resulted in an R²X of 99.3%, an R²Y of 99.6%, and a Q² of 95.3%. For additional validation of the aspen prediction, the aspen content of three unbleached hardwood samples from the mill's production was calculated to lie between -7% and +6% according to the PLS model. When vessel cells were counted under a light microscope, a value for the aspen content of 4.7% was obtained. The predictive models evaluated were suitable for quality assessments rather than quantitative determination. (C) 2010 Elsevier B.V. All rights reserved. | [
"multivariate data",
"near-infrared",
"aspen predictions",
"on-line analyser"
] | [
"P",
"P",
"R",
"R"
] |
446g2b6 | A Generalized Tamper Localization Approach for Reversible Watermarking Algorithms | In general reversible watermarking algorithms, the convention is to reject the entire cover image at the receiver end if it fails authentication, since there is no way to detect the exact locations of tampering. This feature may be exploited by an adversary to bring about a form of DoS attack. Here we provide a solution to this problem in the form of a tamper localization mechanism for reversible watermarking algorithms, which allows selective rejection of distorted cover image regions in case of authentication failure, thus avoiding rejection of the complete image. Additionally, it minimizes the bandwidth requirement of the communication channel. | [
"tamper localization",
"reversible watermarking",
"authentication",
"security",
"performance",
"digital image forensics"
] | [
"P",
"P",
"P",
"U",
"U",
"M"
] |
4Xp6mx& | Hierarchical Graph Maps | Graphs and maps are powerful abstractions. Their combination, Hierarchical Graph Maps, provides effective tools to process a graph that is too large to fit on the screen. They provide hierarchical visual indices (i.e. maps) that guide navigation and visualization. Hierarchical graph maps deal in a unified manner with both the screen and I/O bottlenecks. This line of thinking adheres to the Visual Information Seeking Mantra: Overview first, zoom and filter, then details on demand (Information Visualization: dynamic queries, star field displays and lifelines, in www.cr.umd.edu, 1997). We highlight the main tasks behind the computation of Graph Maps and provide several examples. The techniques have been used experimentally in the navigation of graphs defined on vertex sets ranging from 100 to 250 million vertices. (C) 2004 Elsevier Ltd. All rights reserved. | [
"graphs",
"visualization",
"massive data sets",
"hierarchy trees"
] | [
"P",
"P",
"M",
"U"
] |
1Qp7f4C | On self-dual ternary codes and their coordinate ordering | The paper studies self-orthogonal codes over GF(3). The state complexities of such codes of lengths ≤ 20 with efficient coordinate ordering are found. | [
"coordinate ordering",
"ternary linear codes",
"graphs"
] | [
"P",
"M",
"U"
] |
4yNbf75 | Algorithm 835: MultRoot - A Matlab package for computing polynomial roots and multiplicities | MULTROOT is a collection of Matlab modules for accurate computation of polynomial roots, especially roots with non-trivial multiplicities. As a blackbox-type software, MULTROOT requires the polynomial coefficients as the only input, and outputs the computed roots, multiplicities, backward error, estimated forward error, and the structure-preserving condition number. The most significant features of MULTROOT are the multiplicity identification capability and high accuracy on multiple roots without using multiprecision arithmetic, even if the polynomial coefficients are inexact. A comprehensive test suite of polynomials that are collected from the literature is included for numerical experiments and performance comparison. | [
"algorithms",
"polynomial",
"multiplicity identification",
"multiple root",
"experimentation",
"documentation",
"multiple zero",
"root-finding"
] | [
"P",
"P",
"P",
"P",
"U",
"U",
"M",
"U"
] |
4qCPVB: | Information communication technology spending in (2008-) economic crisis | Purpose - The purpose of this paper is to analyze the impact of the recent (2008-) economic crisis on information communication technology (ICT) spending. The empirical findings are discussed within a broader theoretical framework of technological trends/diffusion and economic cycles. Design/methodology/approach - First, the paper introduces the innovation diffusion theory and theories of economic cycles. Next, it presents the analyses of the data from official statistics, international agencies and research companies. Finally, it summarizes the empirical findings within theoretical contexts. Findings - In general, crises always reduce spending and therefore also ICT spending. However, focusing on the recent crisis, it affected the ICT market selectively and also much less than other sectors. In addition, the empirical findings indicate that after decades of fast ICT expansion (1971-2000) we are now in a period of slower sectoral growth, which is in line with theories of super cycles, although the authors also propose alternative explanations. Research limitations/implications - The impact of the economic crisis on the ICT market strongly depends on countries' economic situation and development stage. Nonetheless, some ICT segments that allow cost savings, greater productivity and efficiency have been strengthened during the latest (2008-) economic crisis, which also pinpoints the directions for further transformation of ICT. Practical implications - Despite usually reduced budgets during the crisis, managers should pay increased attention to new/alternative ICT solutions (e.g. virtualization, outsourcing, cloud computing) and lowered prices of ICT products/services to increase competitiveness. The crisis can thus be an opportunity to re-examine the contribution of ICT to productivity and workflow efficiency and to introduce new methods for better exploitation of ICT capital. Social implications - The aim of this paper is to contribute to the understanding of the transformation of ICT in economic crises. It also demonstrates that recent crises caused another microwave within the last super cycle. Originality/value - The paper provides empirical insight into the link between economic situation and ICT spending in the past 15 years, with special attention to the changes observed during the latest (2008-) crisis. The analysis is also put into a broader theoretical framework, where it proposes alternative explanations supported by empirical evidence. | [
"communication technologies",
"information technology",
"budgetary control",
"economic depression"
] | [
"P",
"R",
"U",
"M"
] |
21-5j5Q | A Delay Model of Multiple-Valued Logic Circuits Consisting of Min, Max, and Literal Operations | Delay models for binary logic circuits have been proposed and their mathematical properties clarified. Kleene's ternary logic is one of the simplest delay models to express transient behavior of binary logic circuits. Goto first applied Kleene's ternary logic to hazard detection in binary logic circuits in 1948. Besides Kleene's ternary logic, there are many delay models of binary logic circuits, such as Lewis's 5-valued logic. On the other hand, multiple-valued logic circuits have recently come to play an important role in realizing digital circuits. This is because, for example, they can reduce the size of a chip dramatically. Though multiple-valued logic circuits are becoming more important, there are few discussions on delay models of multiple-valued logic circuits. Therefore, in this paper, we introduce a delay model of multiple-valued logic circuits, which are constructed from Min, Max, and Literal operations. We then show some of the mathematical properties of our delay model. | [
"delay model",
"multiple-valued logic",
"multiple-valued logic circuits",
"hazard detection"
] | [
"P",
"P",
"P",
"P"
] |
q3SmZc& | The Integration of the Satellite Communications with the Terrestrial Mobile Network (UMTS) | Nowadays, knowledge about new technologies is indispensable for companies that depend on them to develop as well as to keep ahead of their competitors. The main goal of this paper is to present an availability study that allows the integration of Satellite Communications with the Terrestrial Mobile Network (UMTS), providing global roaming to T-UMTS (Terrestrial Universal Mobile Telecommunications System) users. The simulation carried out shows the feasibility of fragmenting the PDUs (Protocol Data Units) to improve the throughput in the satellite link. | [
"satellite",
"mobile network",
"umts",
"roaming",
"t-umts",
"pdu",
"throughput"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
8k4RHf: | A peer to peer (P2P) architecture for dynamic workflow management | This paper presents the architecture of a novel Peer to Peer (P2P) workflow management system. The proposed P2P architecture is based on concepts such as a Web Workflow Peers Directory (WWPD) and Web Workflow Peer (WWP). The WWPD is an active directory system that maintains a list of all peers (WWPs) that are available to participate in Web workflow processes. Similar to P2P systems such as Napster and Gnutella, it allows peers to register with the system and offer their services and resources to other peers over the Internet. Furthermore, the architecture supports a novel notification mechanism to facilitate distributed workflow administration and management. Employing P2P principles can potentially simplify the workflow process and provide a more open, scalable process model that is shared by all workflow participants. This would enable for example a WWP to connect directly to another without going through an intermediary, currently represented by the workflow process management server. P2P workflow becomes more efficient as the number of peers performing the same role increases. Available peers can be discovered dynamically from the WWPD. The few currently existing P2P based workflow systems fail to utilise state of the art Web technologies such as Web Services. In contrast, using the approach described here it is possible to expose interoperable workflow processes over the Internet as services. A medical consultation case study is used to demonstrate the proposed system. | [
"peer to peer",
"workflow management",
"web services",
"business process management",
"xml",
"simple object access protocol",
"bpel4ws"
] | [
"P",
"P",
"P",
"M",
"U",
"U",
"U"
] |
4zP5rGf | Dynamic intention structures I: a theory of intention representation | This article introduces a new theory of intention representation which is based on a structure called a Dynamic Intention Structure (DIS). The theory of DISs was motivated by the problem of how to properly represent incompletely specified intentions and their evolution. Since the plans and intentions of collaborating agents are most often elaborated incrementally and jointly, elaboration processes naturally involve agreements among agents on the identity of appropriate agents, objects and properties that figure into their joint plans. The paper builds on ideas from dynamic logic to present a solution to the representation and evolution of agent intentions involving reference to incompletely specified and, possibly, mutually dependent intentions, as well as the objects referenced within those intentions. It provides a first order semantics for the resulting logic. A companion paper extends further the logical form of DISs and explores the problem of logical consequence and intention revision. | [
"intentions",
"representation",
"collaborative planning"
] | [
"P",
"P",
"R"
] |
1P3vb8: | Proper real reparametrization of rational ruled surfaces | Let K ⊆ R be a computable field. We present an algorithm to decide whether a proper rational parametrization of a ruled surface, with coefficients in K(i), can be properly reparametrized over a real (i.e. embedded in R) finite field extension of K. Moreover, in the affirmative case, the algorithm provides a proper parametrization with coefficients in a real extension of K of degree at most 2. (C) 2010 Elsevier B.V. All rights reserved. | [
"ruled surfaces",
"real and complex surfaces"
] | [
"P",
"M"
] |
1kG8R2d | scalable fpga implementation for mixed-norm lms-lmf adaptive filters | This work proposes a scalable architecture for implementing a mixed-norm LMS-LMF adaptive algorithm using a 16-bit fixed-point arithmetic representation. The hardware scalability allows flexibility in selecting the order of the filter without redesigning the hardware. The filter also allows flexibility in using application-specific sampling frequencies. The hardware architecture was implemented using a Virtex-4 FPGA ML402 board. A 2nd-order prototype was tested through both hardware and software simulations. According to the synthesis and the simulation results obtained, the adaptive filter coefficients would converge in less than 17 µs with accuracy greater than 95%. | [
"fpga",
"lms-lmf",
"adaptive filters",
"scalable architecture"
] | [
"P",
"P",
"P",
"P"
] |
4HtVUwQ | Interval additive generators of interval t-norms and interval t-conorms | The aim of this paper is to introduce the concepts of interval additive generators of interval t-norms and interval t-conorms, as interval representations of additive generators of t-norms and t-conorms, respectively, considering both the correctness and the optimality criteria. The formalization of interval fuzzy connectives in terms of their interval additive generators provides a more systematic methodology for the selection of interval t-norms and interval t-conorms in the various applications of fuzzy systems. We also prove that interval additive generators satisfy the main properties of additive generators discussed in the literature. | [
"interval additive generator",
"interval fuzzy logic",
"interval-valued fuzzy connectives",
"interval triangular norm",
"interval triangular conorm"
] | [
"P",
"M",
"M",
"M",
"M"
] |
1i-d2xx | A multi-agent system for web-based risk management in small and medium business | Business Intelligence has gained relevance in recent years as a means to improve business decision making. However, there is still a growing need for innovative tools that can help small to medium sized enterprises to predict risky situations and manage inefficient activities. This article presents a multi-agent system especially created to detect risky situations and provide recommendations to the internal auditors of SMEs. The core of the multi-agent system is a type of agent with advanced reasoning capacities for making predictions based on previous experiences. This agent type is used to implement an evaluator agent specialized in detecting risky situations and an advisor agent aimed at providing decision support facilities. Both agents incorporate innovative techniques in the stages of the CBR system. An initial prototype was developed, and the results obtained for small and medium enterprises in a real scenario are presented. | [
"business intelligence",
"cbr",
"hybrid neural intelligent system",
"mas",
"business risk prediction"
] | [
"P",
"P",
"M",
"U",
"R"
] |
14z5448 | OOPUS-DESIGNER - User-friendly master data maintenance through intuitive and interactive visualization | Valid and consistent master data are a prerequisite for efficiently working Enterprise Resource Planning (ERP) and Production Planning and Control (PPC) systems. Unfortunately, users are often confused by a large number of forms or transactions in these systems. Confusing interfaces lead to faulty master data. In this paper we introduce a tool that provides intuitive and interactive visualization for the master data administration of a PPC system. | [
"master data",
"visualization",
"erp",
"ppc",
"human computer interaction"
] | [
"P",
"P",
"P",
"P",
"M"
] |
sQ8N356 | Graph image language techniques supporting radiological hand image interpretations | This paper presents a new approach to the interpretation of complex X-ray images, an approach taking advantage of artificial intelligence and soft computing. The tasks of analyzing and interpreting the cognitive meaning of selected medical diagnostic images are made possible by the application of graph image languages based on EDG-type grammars. The subject of this paper is the wide range of semantic interpretation possibilities of hand and wrist radiogrammes using language formalisms. The objective of the analysis conducted is to detect pathological lesions of an in-born or acquired character as well as bone morphology lesions and bone dislocations as seen in bone defects. The application of graph formalisms based on EDG languages allows us to make an effective classification of their meaning with polynomial complexity. | [
"artificial intelligence",
"medical image understanding",
"intelligent information systems",
"syntactic pattern recognition",
"hand disease diagnostics"
] | [
"P",
"M",
"M",
"U",
"M"
] |
5-9XYJy | Optimization with the Hopfield network based on correlated noises: Experimental approach | This paper presents two simple optimization techniques based on combining the Langevin Equation with the Hopfield Model. The proposed models, referred to as the stochastic model (SM) and the pulsed noise model (PNM), can be regarded as straightforward stochastic extensions of the Hopfield optimization network. Both models follow the idea of the stochastic neural network (Levy and Adams, IEEE Conference on Neural Networks, vol. III, San Diego, USA, 1987, pp. 681-689) and the diffusion machine (Wong, Algorithmica 6 (1991) 466-478). They differ from the referred approaches in the nature of the noises and the way of their injection. Optimization with the stochastic model, unlike in the previous works in which delta-correlated Gaussian noises were considered, is based on Gaussian noises with positive autocorrelation times. This is a reasonable assumption from a hardware implementation point of view. In the other model, the pulsed noise model, Gaussian noises are injected into the system only at certain time instances, as opposed to the continuously maintained delta-correlated noises used in the previous related works. In both models (SM and PNM) the intensities of the injected noises are independent of the neurons' potentials. Moreover, instead of impractically long inverse logarithmic cooling schedules, linear cooling is tested. With the above strong simplifications neither SM nor PNM is expected to rigorously maintain thermal equilibrium (TE). However, numerical tests based on the canonical Gibbs-Boltzmann distribution show that differences between rigorous and estimated values of TE parameters are relatively low (within a few percent). In this sense both models are said to perform in quasithermal equilibrium. The optimization performance and quasithermal equilibrium properties of both models are presented based on the travelling salesman problem (TSP). | [
"hopfield model",
"gaussian noise",
"gibbsboltzmann distribution",
"travelling salesman problem",
"stochastic optimization",
"simulated annealing"
] | [
"P",
"P",
"P",
"P",
"R",
"U"
] |
3ZUHpnu | Framework for link reliability in inter-working multi-hop wireless networks | With the increase in deployment of multi-hop wireless networks and the desire for seamless internet access through ubiquitous connectivity, the inter-working of heterogeneous multi-hop wireless networks will become prominent in the near future. To complement the quest for ubiquitous service access, multi-mode mobile terminals are now in existence. Inter-working heterogeneous multi-hop wireless networks can provide seamless connectivity for such multi-mode nodes but introduces a number of challenges due to its dynamic network topology. One of the challenges in ensuring seamless access to service through these terminals in an inter-working environment is the selection of reliable wireless point-to-point links by the multi-hop nodes. A wireless link is said to be reliable if its radio attribute satisfies the minimum requirements for successful communication. Successful communication is specified by metrics such as signal to interference and noise ratio (SINR), probability of bit error etc. However, the multi-hop wireless networks being inter-worked may operate with different link layer protocols. Therefore, how can the reliability of a wireless link be estimated irrespective of the link level technologies implemented in the networks being inter-worked so that optimal paths can be used for multi-hopping between nodes? In this paper, a generic framework which can estimate the reliability of a link in inter-working multi-hop wireless network is presented. The framework uses the relationship between inter-node interference, SINR and the probability of bit error to determine the reliability of a wireless link between two nodes. There is a threshold for the probability of bit error on a link for the link to be termed reliable. Using parameters such as the SINR threshold, nodes' transmission power, link distance and interfering node density, the framework can evaluate the reliability of a link in an inter-working multi-hop network. (C) 2010 Elsevier Ltd. All rights reserved. | [
"reliability",
"inter-working",
"multi-hop",
"wireless networks",
"probability of bit error"
] | [
"P",
"P",
"P",
"P",
"P"
] |
-cJNSZv | Learning by discovering concept hierarchies | We present a new machine learning method that, given a set of training examples, induces a definition of the target concept in terms of a hierarchy of intermediate concepts and their definitions. This effectively decomposes the problem into smaller, less complex problems. The method is inspired by the Boolean function decomposition approach to the design of switching circuits. To cope with high time complexity of finding an optimal decomposition, we propose a suboptimal heuristic algorithm. The method, implemented in program HINT(Hierarchy INduction Tool), is experimentally evaluated using a set of artificial and real-world learning problems. In particular, the evaluation addresses the generalization property of decomposition and its capability to discover meaningful hierarchies. The experiments show that HINT performs well in both respects. (C) 1999 Elsevier Science B.V. All rights reserved. | [
"concept hierarchies",
"machine learning",
"function decomposition",
"generalization",
"concept discovery",
"constructive induction"
] | [
"P",
"P",
"P",
"P",
"M",
"M"
] |
2ZtJoeR | Neural network based framework for fault diagnosis in batch chemical plants | In this work, an artificial neural network (ANN) based framework for fault diagnosis in batch chemical plants is presented. The proposed FDS consists of an ANN structure supplemented with a knowledge based expert system (KBES) in a block-oriented configuration. The system combines the adaptive learning diagnostic procedure of the ANN and the transparent deep knowledge representation of the KBES. The information needed to implement the FDS includes a historical database of past batches, a hazard and operability (HAZOP) analysis and a model of the batch plant. The historical database that includes information related to normal and abnormal operating conditions is used to train the ANN structure. The deviations of the on-line measurements from a reference profile are processed by a multi-scale wavelet in order to determine the singularities of the transients and to reduce the dimensionality of the data. The processed signals are the inputs of an ANN. The ANNs outputs are the signals of the different suspected faults. The HAZOP analysis is useful to build the process deep knowledge base (KB) of the plant. This base relies on the knowledge of the operators and engineers about the process and allows the formulation of artificial intelligence algorithms. The case study corresponds to a batch reactor. The FDS performance is demonstrated through the simulation of different process faults. The FDS proposed is also compared with other approaches based on multi-way principal component analysis. (C) 2000 Elsevier Science Ltd. All rights reserved. | [
"fault diagnosis",
"artificial neural networks",
"batch plants"
] | [
"P",
"P",
"P"
] |
-rSSLuu | A subject-specific technique for respiratory motion correction in image-guided cardiac catheterisation procedures | We describe a system for respiratory motion correction of MRI-derived roadmaps for use in X-ray guided cardiac catheterisation procedures. The technique uses a subject-specific affine motion model that is quickly constructed from a short pre-procedure MRI scan. We test a dynamic MRI sequence that acquires a small number of high resolution slices, rather than a single low resolution volume. Additionally, we use prior knowledge of the nature of cardiac respiratory motion by constraining the model to use only the dominant modes of motion. During the procedure the motion of the diaphragm is tracked in X-ray fluoroscopy images, allowing the roadmap to be updated using the motion model. X-ray image acquisition is cardiac gated. Validation is performed on four volunteer datasets and three patient datasets. The accuracy of the model in 3D was within 5 mm in 97.6% of volunteer validations. For the patients, 2D accuracy was improved from 5-13 mm before applying the model to 2-4 mm afterwards. For the dynamic MRI sequence comparison, the highest errors were found when using the low resolution volume sequence with an unconstrained model. | [
"respiratory motion",
"cardiac",
"catheterisation",
"augmented reality"
] | [
"P",
"P",
"P",
"U"
] |
-LFv:k4 | On linear sets on a projective line | Linear sets generalise the concept of subgeometries in a projective space. They have many applications in finite geometry. In this paper we address two problems for linear sets: the equivalence problem and the intersection problem. We consider linear sets as quotient geometries and determine the exact conditions for two linear sets to be equivalent. This is then used to determine in which cases all linear sets of rank 3 of the same size on a projective line are (projectively) equivalent. In (Donati and Durante, Des Codes Cryptogr, 46:261-267), the intersection problem for subgeometries of PG(n, q) is solved. The intersection of linear sets is much more difficult. We determine the intersection of a subline PG(1, q) with a linear set in PG(1, q^h) and investigate the existence of irregular sublines, contained in a linear set. We also derive an upper bound, which is sharp for odd q, on the size of the intersection of two different linear sets of rank 3 in PG(1, q^h). | [
"linear sets",
"desarguesian spreads",
"projected subgeometries"
] | [
"P",
"U",
"R"
] |
u1zEosL | N-graphs: Scalable topology and design of balanced divide-and-conquer algorithms | A parallel implementation of binary, balanced divide-and-conquer is derived systematically from its functional specification. The implementation makes use of a new processor topology, called N-graph, and has the following properties: there are not more than 4 links per processor, the processor network is of an arbitrary fixed size, the load is balanced and all communications are local. A parallel mergesort algorithm is used to illustrate the derivation process and the parallel target program with message passing. Experiments on a 64-node transputer network are presented. | [
"divide-and-conquer",
"transputer",
"parallel topologies",
"transformations"
] | [
"P",
"P",
"R",
"U"
] |
4adq1M5 | Bottom-Up Saliency Detection Model Based on Human Visual Sensitivity and Amplitude Spectrum | With the wide applications of saliency information in visual signal processing, many saliency detection methods have been proposed. However, some key characteristics of the human visual system (HVS) are still neglected in building these saliency detection models. In this paper, we propose a new saliency detection model based on the human visual sensitivity and the amplitude spectrum of quaternion Fourier transform (QFT). We use the amplitude spectrum of QFT to represent the color, intensity, and orientation distributions for image patches. The saliency value for each image patch is calculated by not only the differences between the QFT amplitude spectrum of this patch and other patches in the whole image, but also the visual impacts for these differences determined by the human visual sensitivity. The experiment results show that the proposed saliency detection model outperforms the state-of-the-art detection models. In addition, we apply our proposed model in the application of image retargeting and achieve better performance over the conventional algorithms. | [
"saliency detection",
"human visual sensitivity",
"amplitude spectrum",
"fourier transform",
"visual attention"
] | [
"P",
"P",
"P",
"P",
"M"
] |
2f-fSYn | Comments on "Generalized rate monotonic schedulability bounds using relative period ratios" | In this Letter, a counter-example is presented to show that the schedulability test method for task sets with a maximum period ratio larger than or equal to 2, presented in the paper [Wei et al., Generalized rate monotonic schedulability bounds using relative period ratios, Information Processing Letters 107 (5) (2008) 142-148], is not exactly correct. Correct sufficient conditions for using period ratios in the RM schedulability test when the maximum period ratio is not less than 2 are also presented. (C) 2010 Elsevier B.V. All rights reserved. | [
"scheduling",
"schedulability test",
"real-time systems",
"real-time scheduling",
"rate-monotonic analysis"
] | [
"P",
"P",
"U",
"M",
"U"
] |
YhLvLfB | Optimal product design using a colony of virtual ants | The optimal product design problem, where the best mix of product features is formulated into an ideal offering, is addressed using ant colony optimization (ACO). Here, algorithms based on the behavior of social insects are applied to a consumer decision model designed to guide new product decisions and to allow planning and evaluation of product offering scenarios. ACO heuristics are efficient at searching through a vast decision space and are extremely flexible when model inputs continuously change. When compared to complete enumeration of all possible solutions, ACO is found to generate near-optimal results for this problem. Prior research has focused primarily on optimal product planning using consumer preference data from a single point in time. Extant literature suggests these formulations are overly simplistic, as a consumer's level of preference for a product is affected by past experience and prior choices. This application models consumer preferences as evolutionary, shifting over time. | [
"ant colony optimization (aco)",
"heuristics",
"combinatorial optimization",
"product design/planning",
"swarm intelligence (si)"
] | [
"P",
"P",
"M",
"M",
"M"
] |
4dTDDJ5 | The Skyline algorithm for POMDP value function pruning | We address the pruning or filtering problem, encountered in exact value iteration in POMDPs and elsewhere, in which a collection of linear functions is reduced to the minimal subset retaining the same maximal surface. We introduce the Skyline algorithm, which traces the graph corresponding to the maximal surface. The algorithm has both a complete and an iterative version, which we present, along with the classical Lark's algorithm, in terms of the basic dictionary-based simplex iteration from linear programming. We discuss computational complexity results, and present comparative experiments on both randomly-generated and well-known POMDP benchmarks. | [
"pomdp",
"linear programming",
"dynamic programming"
] | [
"P",
"P",
"M"
] |
-fRKRwU | Traffic-Differentiation-Based Modular QoS Localized Routing for Wireless Sensor Networks | A new localized quality of service (QoS) routing protocol for wireless sensor networks (WSN) is proposed in this paper. The proposed protocol targets WSN applications having different types of data traffic. It is based on differentiating QoS requirements according to the data type, which enables the provision of several customized QoS metrics for each traffic category. With each packet, the protocol attempts to fulfill the required data-related QoS metric(s) while considering power efficiency. It is modular and uses geographical information, which eliminates the need for propagating routing information. For link quality estimation, the protocol employs distributed, memory- and computation-efficient mechanisms. It uses a multisink single-path approach to increase reliability. To our knowledge, this protocol is the first that makes use of the diversity in data traffic while considering latency, reliability, residual energy in sensor nodes, and transmission power between nodes to cast QoS metrics as a multiobjective problem. The proposed protocol can operate with any medium access control (MAC) protocol, provided that it employs an acknowledgment (ACK) mechanism. An extensive simulation study with scenarios of 900 nodes shows that the proposed protocol outperforms all comparable state-of-the-art QoS and localized routing protocols. Moreover, the protocol has been implemented on sensor motes and tested in a sensor network testbed. | [
"wireless sensor networks",
"quality of service",
"geographical routing",
"distributed protocols"
] | [
"P",
"P",
"R",
"R"
] |
4RmyYqr | Heart rate wavelet coherence analysis to investigate group entrainment | Unobtrusive, wearable sensors that measure physiological signals can provide useful information on an individual's autonomic response. In this paper, we propose techniques to jointly analyze heart rate variations from a group of individuals in order to study group dynamics. We use wavelet coherence to analyze the RR interval signals from a group of N ≥ 2 individuals and uncover shared frequency components as a function of time. We study the intrinsic delay and accuracy limitations of a possible real-time implementation. We substantiate our analysis with data obtained from Kundalini yoga meditation sessions that reveal a coherence pattern among the individuals as they perform particular prescribed activities. The methodology proposed in this paper may help quantify coherence of the autonomic response of groups involved in numerous everyday activities. | [
"wavelet coherence",
"group dynamics",
"heart rate variability",
"wireless health"
] | [
"P",
"P",
"M",
"U"
] |
5-hgiei | A sliding interface method for unsteady unstructured flow simulations | The objective of this work is to develop a sliding interface method for simulations involving relative grid motion that is fast and efficient and involves no grid deformation, remeshing, or hole cutting. The method is implemented into a parallel, node-centred finite volume, unstructured viscous flow solver. The rotational motion is accomplished by rigidly rotating the subdomain representing the moving component. At the subdomain interface boundary, the faces along the interface are extruded into the adjacent subdomain to create new volume elements forming a one-cell overlap. These new volume elements are used to compute a flux across the subdomain interface. An interface flux is computed independently for each subdomain. The values of the solution variables and other quantities for the nodes created by the extrusion process are determined by linear interpolation. The extrusion is done so that the interpolation will maintain information as localized as possible. The grid on the interface surface is arbitrary. The boundaries of the two subdomains are completely independent of one another, meaning that they do not have to connect in a one-to-one manner and no symmetry or pattern restrictions are placed on the grid. A variety of numerical simulations were performed on model problems and large-scale applications to examine conservation of the interface flux. Overall solution errors were found to be comparable to those for fully connected and fully conservative simulations. Excellent agreement is obtained with theoretical results and results from other solution methodologies. Copyright (c) 2006 John Wiley & Sons, Ltd. | [
"relative grid motion",
"cfd",
"unstructured grids"
] | [
"P",
"U",
"R"
] |
-dhDBi8 | difference of inflow and outflow based 3d streamline placement | The streamline-based method is one of the most important vector field visualization methods. In past streamline placement algorithms, little physics-related information was considered, which in our opinion is very important. A novel streamline placement algorithm for 3D vector fields is introduced in this paper. We measure the difference between the inflow and the outflow to evaluate the local spatially-varying feature at a specified field point. A Difference of Inflow and Outflow Matrix (DIOM) is then calculated to describe the global appearance of the field. We draw streamlines by choosing the local extreme points in the DIOM as seeds. The DIOM is somewhat like flow divergence and is physics-related, and thus reflects intrinsic characteristics of the vector field. The strategy performs well in revealing features of 3D vector fields even with relatively few streamlines. | [
"streamline placement",
"vector field visualization",
"difference of inflow and outflow matrix",
"seeding strategy"
] | [
"P",
"P",
"P",
"R"
] |
-e7YX-8 | Improving multivariate Horner schemes with Monte Carlo tree search | Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two. (C) 2013 Elsevier B.V. All rights reserved. | [
"computational techniques"
] | [
"M"
] |
4ALyhDX | A cross-efficiency profiling for increasing discrimination in Data Envelopment Analysis | Data Envelopment Analysis (DEA) cannot provide adequate discrimination among efficient decision making units (DMUs). To discriminate these efficient DMUs is an interesting research subject. The purpose of this paper is to present a Cross-Efficiency Profiling (CEP) model which can be used to improve discrimination power of DEA and conduct a methodological comparison of CEP and the other developed methods without a priori information. CEP retains the original spirit of DEA in trying to extract as much information as possible from the data without requiring pre-selected weights on inputs and outputs. We propose that inputs which are not substitutes for each other be assessed separately and only with respect to outputs which consume them or to which they are otherwise related. In this way input-specific ratings based on the concept of cross-efficiency measure are derived giving a profile for each DMU. We will demonstrate that CEP is more discriminating through an example taken from Baker and Talluri [Computer and Industrial Engineering, 32(1), 101-108 (1997)]. | [
"cross-efficiency",
"data envelopment analysis",
"super-efficiency"
] | [
"P",
"P",
"U"
] |
2pfwRRq | Use of neural networks for dosage individualisation of erythropoietin in patients with secondary anemia to chronic renal failure | The external administration of recombinant human erythropoietin is the chosen treatment for those patients with secondary anemia due to chronic renal failure undergoing periodic hemodialysis. The goal is to carry out an individualised prediction of the erythropoietin dosage to be administered. It is justified because of the high cost of this medication, its secondary effects and the phenomenon of potential resistance which some individuals suffer. One hundred and ten patients were included in this study and several factors were collected in order to develop the neural models. Since the results obtained were excellent, an easy-to-use decision-aid computer application was implemented. | [
"neural networks",
"erythropoietin",
"anemia",
"chronic renal failure",
"therapeutic drug monitoring",
"time-series prediction"
] | [
"P",
"P",
"P",
"P",
"U",
"M"
] |
-Z-kqpa | high performance code compression architecture for the embedded arm/thumb processor | The use of code compression in embedded systems based on standard RISC instruction set architectures (ISA) has been shown in the past to be of benefit in reducing overall system cost. The 16-bit THUMB ISA from ARM Ltd has a significantly higher density than the original 32-bit ARM ISA. Our proposed memory compression architecture has shown a further size reduction of 15% to 20% on the THUMB code. In this paper we propose to use a high-speed, lossless hardware data decompressor to improve the timing performance of the architecture. We simulated the architecture on the SimpleScalar platform and show that for some applications, the time overheads are limited to within 5% of the original application. | [
"performance",
"code",
"code compression",
"compression",
"architecture",
"embedding",
"processor",
"use",
"embedded systems",
"systems",
"standardization",
"instruction",
"cost",
"memorialized",
"size",
"reduction",
"paper",
"data",
"hardware",
"timing",
"simulation",
"platform",
"applications",
"high-speed hardware decompressor",
"high-performance"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"R",
"U"
] |
-aixU4- | Educating technophile artists and artophile technologists: A successful experiment in higher education | Over the past few decades, the arts have become increasingly dependent on and influenced by the development of computer technology. In the 1960s pioneering artists experimented with the emergent computer technology, and more recently the majority of artists have come to use this technology to develop and even to implement their artefacts. The traditional divide between art and technology, if it ever existed, has been breaking down to the extent that a large number of artists consider themselves to be technophiles. In truth, this divide has never existed. Throughout history artists have always used and exploited whatever technology existed and frequently led the development of new technology that would allow them to express their creativity. For instance, the ancient Greek word for art was τέχνη (techne), the etymological root of the word technology. The divide between the arts and sciences, which we consider to be artificial and harmful, was only introduced in the western educational system in the 19th century, and we believe that it is high time that it was bridged or removed altogether. To this end our centre has pioneered a number of university degrees that aim to blur the difference between artists and scientists/technologists. In this paper we explore the design of such courses, taking into account the evolution of the field and the historical development of our centre, and we share with our audience our experiences, successes, and trials and tribulations in implementing such degrees in the area of computer animation, games and digital effects. We present a discussion of the syllabus employed in our highly successful undergraduate degree programme, giving examples of various assignment and assessment forms. Further, we discuss the common issues in educating technophile artists that we have identified on our undergraduate programme and the implications for students' learning experience arising from these. | [
"computer animation education",
"digital effects education",
"computer games education"
] | [
"R",
"R",
"R"
] |
177-vxc | Aluminum alloy profile extrusion simulation using finite volume method on nonorthogonal structured grids | Purpose - The paper aims to use the finite volume method widely used in computational fluid dynamics to avoid the serious remeshing and mesh distortion that occur during aluminium profile extrusion simulation when using the finite element method. Block-structured grids are used to fit the complex domain of the extrusion. A finite volume method (FVM) model for aluminium extrusion numerical simulation using non-orthogonal structured grids was established. Design/methodology/approach - The influences of the elements' nonorthogonality on the discretization of the governing equations of the metal flow in aluminium extrusion processes were fully considered to ensure the simulation accuracy. A volume-of-fluid (VOF) scheme was used to capture the free surface of the unsteady flow. A rigid slip boundary condition was applied on the non-orthogonal grids. Findings - This paper involved a simulation of a typical aluminium extrusion process by the FVM scheme. By comparing the simulation by the FVM model established in this paper with the ones obtained with the finite element method (FEM) software Deform-3D and the corresponding experiments, the correctness and efficiency of the FVM model for aluminium alloy profile extrusion processes was proved. Originality/value - This paper uses the FVM widely used in CFD to calculate aluminium profile extrusion processes, avoiding the remeshing and mesh distortion that occur during aluminium profile extrusion simulation when using the finite element method. Block-structured grids, with the advantages of simple data structure, small storage and high numerical efficiency, are used to fit the complex domain of the extrusion. | [
"alloys",
"finite volume method",
"aluminium",
"flow",
"aluminum profile extrusion",
"non-orthogonal block-structured grids",
"volume-of-fluid scheme"
] | [
"P",
"P",
"P",
"P",
"R",
"R",
"R"
] |
3g-kJrG | Hybrid Parallel Implementation of Inverse Matrix Computation by SMW Formula for Interactive Simulation | In this paper, a hybrid parallel implementation of inverse matrix computation using the SMW formula is proposed. By aggregating the memory bandwidth in the hybrid parallel implementation, the bottleneck due to the memory bandwidth limitation in the authors' previous multicore implementation has been resolved. A speedup of more than 8 times is also achieved with a dual-core, 8-node implementation, which yields more than 20 simulation steps per second, or near real-time performance. | [
"smw formula",
"interactive simulation",
"linear equation solver",
"parallel processing",
"real-time processing"
] | [
"P",
"P",
"U",
"M",
"M"
] |
2pXn4D4 | bitmap algorithms for counting active flows on high speed links | This paper presents a family of bitmap algorithms that address the problem of counting the number of distinct header patterns (flows) seen on a high speed link. Such counting can be used to detect DoS attacks and port scans, and to solve measurement problems. Counting is especially hard when processing must be done within a packet arrival time (8 nsec at OC-768 speeds) and, hence, must require only a small number of accesses to limited, fast memory. A naive solution that maintains a hash table requires several Mbytes because the number of flows can be above a million. By contrast, our new probabilistic algorithms take very little memory and are fast. The reduction in memory is particularly important for applications that run multiple concurrent counting instances. For example, we replaced the port scan detection component of the popular intrusion detection system Snort with one of our new algorithms. This reduced memory usage on a ten minute trace from 50 Mbytes to 5.6 Mbytes while maintaining a 99.77% probability of alarming on a scan within 6 seconds of when the large-memory algorithm would. The best known prior algorithm (probabilistic counting) takes 4 times more memory on port scan detection and 8 times more on a measurement application. Fundamentally, this is because our algorithms can be customized to take advantage of special features of applications such as a large number of instances that have very small counts or prior knowledge of the likely range of the count. | [
"algorithm",
"activation",
"links",
"links",
"paper",
"families",
"addressing",
"pattern",
"detection",
"attack",
"scan",
"measurement",
"process",
"timing",
"memorialized",
"hash table",
"probabilistic algorithm",
"reduction",
"applications",
"concurrency",
"examples",
"component",
"intrusion detection system",
"traces",
"probability",
"custom",
"feature",
"knowledge",
"network traffic measurement",
"counting flows",
"linking"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"M",
"R",
"P"
] |
1N:NkS1 | optimizing cost and quality by integrating inspection and test processes | Inspections and testing are two of the most commonly performed software quality assurance processes today. Typically, these processes are applied in isolation, which, however, fails to exploit the benefits of systematically combining and integrating them. Expected benefits of such process integration are higher defect detection rates or reduced quality assurance effort. Moreover, when conducting testing without any prior information regarding the system's quality, it is often unclear which parts or which defect types should be prioritized. Existing approaches do not explicitly use information from inspections in a systematical way to focus testing processes. In this article, we present an integrated two-stage approach that routes inspection data to test processes in order to prioritize code classes and defect types. While an initial version of the approach focused on prioritizing code classes, this article focuses on the prioritization of defect types for testing. Results from a case study where the approach was applied on the code level show that those defect types could be prioritized before the testing that afterwards actually showed up most often during the test process. In addition, an overview of related work and an outlook on future research directions are given. | [
"inspection",
"testing",
"defect types",
"case study",
"testing focus",
"odc",
"quality assurance strategy",
"two-stage prioritization"
] | [
"P",
"P",
"P",
"P",
"R",
"U",
"M",
"R"
] |
2S91L54 | Consistency of UML class, object and statechart diagrams using ontology reasoners ? | Reasoning of UML models containing multiple class, object and statechart diagrams. Reasoning of UML models using logic reasoners for the Web Ontology Language OWL 2. We describe how to translate UML class, object and statechart diagrams in OWL 2. We present an automatic tool chain implementing UML to OWL 2 translations. The implemented tool can be used with any standard compliant UML modeling tool. | [
"consistency",
"uml",
"ontology",
"reasoning"
] | [
"P",
"P",
"P",
"P"
] |
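The abstract above only states that UML diagrams are translated into OWL 2. One plausible, purely illustrative fragment of such a translation, mapping a UML generalization to OWL 2 axioms with rdflib, might look like this (the ontology IRI and class names are hypothetical):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/uml2owl#")  # hypothetical ontology IRI

def translate_uml_generalization(graph, subclass_name, superclass_name):
    """Map a UML generalization (subclass -> superclass) to OWL 2 axioms."""
    sub, sup = EX[subclass_name], EX[superclass_name]
    graph.add((sub, RDF.type, OWL.Class))
    graph.add((sup, RDF.type, OWL.Class))
    graph.add((sub, RDFS.subClassOf, sup))

if __name__ == "__main__":
    g = Graph()
    g.bind("ex", EX)
    # Hypothetical UML model: Student is a specialization of Person.
    translate_uml_generalization(g, "Student", "Person")
    print(g.serialize(format="turtle"))
```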
4&KwJq6 | approximation algorithms for data placement in arbitrary networks | We study approximation algorithms for placing replicated data in arbitrary networks. Consider a network of nodes with individual storage capacities and a metric communication cost function, in which each node periodically issues a request for an object drawn from a collection of uniform-length objects. We consider the problem of placing copies of the objects among the nodes such that the average access cost is minimized. Our main result is a polynomial-time constant-factor approximation algorithm for this placement problem. Our algorithm is based on a careful rounding of a linear programming relaxation of the problem. We also show that the data placement problem is MAXSNP-hard. We extend our approximation result to a generalization of the data placement problem that models additional costs such as the cost of realizing the placement. We also show that when object lengths are non-uniform, a constant-factor approximation is achievable if the capacity at each node in the approximate solution is allowed to exceed that in the optimal solution by the length of the largest object. | [
"approximation algorithms",
"approximation",
"algorithm",
"data",
"place",
"placement",
"network",
"storage",
"capacities",
"metrication",
"communication",
"cost",
"functional",
"object",
"collect",
"access",
"linear programming",
"general",
"model",
"optimality",
"polynomial",
"factor",
"timing"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U",
"U",
"U"
] |
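To make the placement objective above concrete, the sketch below evaluates the average access cost of a given placement, with each node fetching an object from the nearest node holding a copy. The network, demands, and placement are made up for illustration.

```python
def average_access_cost(distance, placement, demand):
    """Average cost of serving requests given a placement of object copies.

    distance[i][j]  -- metric communication cost between nodes i and j
    placement[obj]  -- set of nodes holding a copy of obj
    demand[i][obj]  -- request rate of node i for obj
    """
    total_cost, total_requests = 0.0, 0.0
    for i, row in enumerate(demand):
        for obj, rate in enumerate(row):
            if rate == 0:
                continue
            nearest = min(distance[i][j] for j in placement[obj])
            total_cost += rate * nearest
            total_requests += rate
    return total_cost / total_requests

if __name__ == "__main__":
    # Hypothetical 3-node network with two objects.
    dist = [[0, 2, 5],
            [2, 0, 4],
            [5, 4, 0]]
    placement = {0: {0}, 1: {2}}      # object 0 on node 0, object 1 on node 2
    demand = [[3, 1],                 # demand[node][object]
              [1, 2],
              [0, 4]]
    print(average_access_cost(dist, placement, demand))
```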
3Q9rGN: | Semi-supervised sparse feature selection based on multi-view Laplacian regularization | A multi-view Laplacian sparse feature selection (MLSFS) algorithm is proposed. Multi-view learning is utilized to exploit the complementary information in the features of different views. An effective iterative algorithm is introduced to optimize the objective function. The convergence of the algorithm is proven. Experiments demonstrate that MLSFS achieves good feature selection performance. | [
"sparse feature selection",
"laplacian regularization",
"multi-view learning",
"semi-supervised learning"
] | [
"P",
"P",
"P",
"R"
] |
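MLSFS relies on Laplacian regularization over several views. The sketch below only shows the common preliminary step of building a per-view k-NN graph Laplacian; the data, neighbourhood size, and view weights are hypothetical.

```python
import numpy as np

def knn_graph_laplacian(X, k=5):
    """Unnormalized graph Laplacian L = D - W of a symmetric k-NN graph.

    X is an (n_samples, n_features) matrix for one view; W uses binary
    weights over the k nearest neighbours (made symmetric by max).
    """
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        neighbours = np.argsort(dist2[i])[1:k + 1]    # skip the point itself
        W[i, neighbours] = 1.0
    W = np.maximum(W, W.T)                            # symmetrize
    return np.diag(W.sum(axis=1)) - W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    views = [rng.normal(size=(30, 8)), rng.normal(size=(30, 12))]  # two hypothetical views
    laplacians = [knn_graph_laplacian(V, k=5) for V in views]
    # A multi-view regularizer typically combines these, e.g. a weighted sum.
    L_combined = sum(0.5 * L for L in laplacians)
    print(L_combined.shape)
```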
45AM6S4 | Mixture model averaging for clustering | In mixture model-based clustering applications, it is common to fit several models from a family and report clustering results from only the best one. In such circumstances, selection of this best model is achieved using a model selection criterion, most often the Bayesian information criterion. Rather than throw away all but the best model, we average multiple models that are in some sense close to the best one, thereby producing a weighted average of clustering results. Two (weighted) averaging approaches are considered: averaging component membership probabilities and averaging models. In both cases, Occam's window is used to determine closeness to the best model and weights are computed within a Bayesian model averaging paradigm. In some cases, we need to merge components before averaging; we introduce a method for merging mixture components based on the adjusted Rand index. The effectiveness of our model-based clustering averaging approaches is illustrated using a family of Gaussian mixture models on real and simulated data. | [
"mixture models",
"model averaging",
"clustering",
"model-based clustering",
"62h30"
] | [
"P",
"P",
"P",
"P",
"U"
] |
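A small sketch of the weighting step described above: BIC-based weights restricted to Occam's window. It assumes BIC is defined so that smaller is better, and uses made-up BIC values rather than fitted mixture models.

```python
import math

def occam_window_weights(bic_values, window=10.0):
    """Bayesian-model-averaging weights from BIC values.

    Assumes BIC is defined so that smaller is better.  Models whose BIC
    exceeds the best BIC by more than `window` fall outside Occam's
    window and get zero weight; the rest are weighted by exp(-0.5 * dBIC).
    """
    best = min(bic_values)
    raw = [math.exp(-0.5 * (b - best)) if b - best <= window else 0.0
           for b in bic_values]
    total = sum(raw)
    return [r / total for r in raw]

if __name__ == "__main__":
    # Hypothetical BICs for four candidate mixture models.
    bics = [1502.3, 1504.1, 1509.8, 1540.2]
    weights = occam_window_weights(bics)
    print([round(w, 3) for w in weights])   # last model falls outside the window
```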
2NauN6N | The automorphism group of a binary self-dual doubly even [72,36,16] code is solvable | In this correspondence, we prove that the automorphism group of a putative binary self-dual doubly even [72, 36, 16] code is solvable. Moreover, its order is 5, 7, 10, 14, 56, or a divisor of 72. | [
"automorphisms",
"self-dual codes",
"self-orthogonal codes"
] | [
"P",
"R",
"M"
] |
1P-:Bbm | temporal filtering system to reduce the risk of spoiling a user's enjoyment | This paper proposes a temporal filtering system called the Anti-Spoiler system. The system changes filters dynamically based on user-specified preferences and the user's timetable, and blocks content that would spoil the user's enjoyment of content not yet watched. The system analyzes user-requested Web content and then uses filters to prevent the display of portions that might spoil the user's enjoyment. For example, the system hides the final score of a football match from Web content until the user has watched the match on TV. | [
"anti-spoiler",
"web",
"temporal content filtering"
] | [
"P",
"P",
"R"
] |
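A toy sketch of the filtering idea above: masking score-like text in sentences about events the user has recorded but not yet watched. The timetable, keyword patterns, and page content are invented for illustration and are not the paper's actual filters.

```python
import re
from datetime import datetime

# Hypothetical user timetable: events recorded to watch later.
UNWATCHED_EVENTS = [
    {"keywords": ["United", "City"], "kickoff": datetime(2024, 5, 4, 18, 0)},
]

SCORE_PATTERN = re.compile(r"\b\d+\s*-\s*\d+\b")   # e.g. "2-1"

def filter_spoilers(text, now):
    """Mask score-like fragments in sentences about unwatched events."""
    filtered = []
    for sentence in text.split(". "):
        spoils = any(
            now >= ev["kickoff"] and all(k in sentence for k in ev["keywords"])
            for ev in UNWATCHED_EVENTS
        )
        filtered.append(SCORE_PATTERN.sub("[hidden]", sentence) if spoils else sentence)
    return ". ".join(filtered)

if __name__ == "__main__":
    page = "United beat City 2-1 last night. The weather was sunny."
    print(filter_spoilers(page, now=datetime(2024, 5, 5, 9, 0)))
    # -> "United beat City [hidden] last night. The weather was sunny."
```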
2bcKrY- | Probability of adequacy evaluation considering power output correlation of renewable generators in Smart Grids | An analytical formulation to assess the DGs' ability to meet the island load is developed. The analytical expressions account for load shedding and curtailment policies. The method uses load models encompassing correlation among the loads' power demands. Generation models with correlation among the renewable DGs' power outputs are used. The method correlates, hour by hour, the loads' power demands and the renewable DGs' power outputs. | [
"probability of adequacy",
"power output correlation",
"renewable generators",
"islanding",
"distributed generation",
"power system reliability"
] | [
"P",
"P",
"P",
"P",
"M",
"M"
] |
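The abstract above concerns analytical adequacy expressions; as a rough numerical counterpart of the same quantity, the Monte Carlo sketch below estimates the probability that correlated renewable DG output covers the island load, with correlation induced by a shared weather factor. All distributions and parameters are hypothetical.

```python
import numpy as np

def probability_of_adequacy(n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(total DG output >= island load).

    Two hypothetical renewable DGs share a common 'weather' factor, which
    induces positive correlation between their power outputs.
    """
    rng = np.random.default_rng(seed)
    weather = rng.normal(size=n_samples)                      # shared factor
    dg1 = np.clip(2.0 + 0.8 * weather + rng.normal(scale=0.3, size=n_samples), 0, None)
    dg2 = np.clip(1.5 + 0.6 * weather + rng.normal(scale=0.3, size=n_samples), 0, None)
    load = rng.normal(loc=3.0, scale=0.4, size=n_samples)     # island load (MW)
    return np.mean(dg1 + dg2 >= load)

if __name__ == "__main__":
    print("probability of adequacy:", probability_of_adequacy())
```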
1R-83xU | Robust Video Restoration by Joint Sparse and Low Rank Matrix Approximation | This paper presents a new patch-based video restoration scheme. By grouping similar patches in the spatiotemporal domain, we formulate the video restoration problem as a joint sparse and low-rank matrix approximation problem. The resulting minimization problem, which involves the nuclear norm and the l1 norm, can be efficiently solved by many recently developed numerical methods. The effectiveness of the proposed video restoration scheme is illustrated on two applications: video denoising in the presence of random-valued noise, and video in-painting for archived films. The numerical experiments indicate that the proposed video restoration method compares favorably against many existing algorithms. | [
"low-rank matrix",
"nuclear norm",
"denoising",
"in-painting",
"sparse matrix"
] | [
"P",
"P",
"P",
"P",
"R"
] |
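A minimal numpy sketch of the two proximal steps that typically underlie a joint sparse plus low-rank model like the one above: singular value thresholding for the nuclear norm and soft thresholding for the l1 term. The patch matrix and threshold values are hypothetical, not the paper's.

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Proximal operator of the nuclear norm: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, lam):
    """Proximal operator of the elementwise l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical stack of vectorized similar patches: low-rank part + sparse outliers.
    low_rank = rng.normal(size=(64, 3)) @ rng.normal(size=(3, 40))
    sparse = (rng.random((64, 40)) < 0.05) * rng.normal(scale=5.0, size=(64, 40))
    observed = low_rank + sparse
    L = singular_value_threshold(observed, tau=2.0)   # low-rank estimate
    S = soft_threshold(observed - L, lam=1.0)         # sparse residual estimate
    print("rank of L:", np.linalg.matrix_rank(L))
```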