id (string, length 7) | title (string, length 3-578) | abstract (string, length 0-16.7k) | keyphrases (sequence) | prmu (sequence) |
---|---|---|---|---|
1WTK6v8 | Corporate performance of ICT-enabled business process re-engineering | Purpose - The purpose of this paper is to evaluate information and communications technology (ICT) adoption and its impact on business changes and performance. Design/methodology/approach - This paper provides a model interconnecting ICT adoption, ICT-enabled business process re-engineering (BPR), and performance in terms of external and internal organizational motivations with a balanced scorecard approach. The framework is tested using survey data from a sample of 377 chief information officers and senior information system managers. Findings - The results indicate that environment capacity fit and a dynamic environment positively affect technology adoption, which in turn directly triggers business process changes, organizational learning and growth, while indirectly affecting improvement of customer satisfaction and financial performance. Research limitations/implications - This study is limited by its sample size due to the complexity of the questionnaire. Originality/value - This paper provides empirical evidence to examine how intra- and extra-organizational factors influence ICT adoption, how ICT shapes BPR, and business performance from a dynamic resource-based view. These findings will be valuable in understanding various motivations of ICT adoption, and predicting the business performance outcomes stemming from ICT-enabled BPR. | [
"business process re-engineering",
"communications technologies",
"balanced scorecard",
"information technology"
] | [
"P",
"P",
"P",
"R"
] |
-zUUZcs | The accurate inversion of Vandermonde matrices | Two modifications are suggested in the commonly used algorithms (such as the O(n^2) Parker algorithm) for the explicit inversion of Vandermonde matrices resulting in an algorithm whose accuracy is no worse than those of the existing algorithms, but which is significantly more accurate in many pathological situations. The first modification circumvents, to some extent, the subtraction of 'two big like-signed numbers' which in turn reduces round-off errors, while the second modification exploits the structure of the inverse and uses two recursive formulae instead of one to bring about an increase in accuracy. Numerical results are presented to demonstrate the increase in accuracy that results from these two modifications. Although the modified algorithm is always at least as accurate as the Parker algorithm, it does, unfortunately, involve an increase in complexity from O(n^2) to O(n^3), so that use of this algorithm to increase the relative accuracy is recommended only in situations where the standard algorithms fail to yield accurate results. (C) 2004 Elsevier Ltd. All rights reserved. | [
"accurate vandermonde inverse"
] | [
"R"
] |
4SfUtr1 | a tool for software development driven by customer interaction | Small IT companies contribute significantly to the national economy and have special characteristics such as a limited employee and customer base and very few products, each with a single path of evolution. They survive and grow on the strong goodwill of their customers, which is built through regular interaction and the support provided to them for installation, operation, maintenance, upgrades and training. Workflows of these companies are not process centric; instead, they are customer interaction centric. In this paper, we present a model of interaction driven software development and a tool to support it. The software development process consists of interaction driven short duration iterations focusing on concurrent activities of coding and related support activities. Each iteration is likely to produce incremental value to the customer in the form of an additional functional feature of a product, a bug fix, or operation support. Members of the software development team enjoy considerable autonomy in decision making based on facts and their beliefs about products, clients, other colleagues and the market environment. Our model is based on some of the concepts of agent modeling, such as plan, goal, role, belief and action. We have implemented a web-based tool using Java and XML which provides functionalities to manage interactions, product feature updates, bug fixing and updating beliefs. It also provides limited facilities for project management. | [
"workflow",
"software development process",
"project management",
"interaction management",
"business process"
] | [
"P",
"P",
"P",
"R",
"M"
] |
-KZTyNS | Spatial reorientation in large and small enclosures: comparative and developmental perspectives | Several vertebrate species, including humans, following passive spatial disorientation appear to be able to reorient themselves by making use of the geometric shape of the environment (i.e., metric properties of surfaces and directional sense). In some circumstances, reliance on such purely geometric information can overcome the use of local featural cues (landmarks). The relative use of geometric and non-geometric information seems to depend upon, among other factors, the size of the experimental space. Evidence in non-human animals and in human infants for primacy in encoding either geometric or landmark information depending on the size of the environment is reviewed, together with possible theoretical accounts of this phenomenon. | [
"spatial reorientation",
"human infants",
"geometry",
"modularity",
"space size",
"chick",
"pigeon",
"fish"
] | [
"P",
"P",
"U",
"U",
"R",
"U",
"U",
"U"
] |
1bTK7Ns | An efficient ear localization technique | This paper proposes an efficient technique for automatic localization of the ear from side face images. The technique is rotation, scale and shape invariant and makes use of the connected components in a graph obtained from the edge map of the side face image. It has been evaluated on the IIT Kanpur database consisting of 2672 side faces with variable sizes, rotations and shapes and the University of Notre Dame database containing 2244 side faces with variable background and poor illumination. Experimental results reveal the efficiency and robustness of the technique. | [
"ear localization",
"connected components",
"skin segmentation",
"biometrics",
"ear recognition"
] | [
"P",
"P",
"U",
"U",
"M"
] |
2rw2JLs | Web-based mining of statistical information | This paper outlines the techniques used for automating the process of regularly generating statistical summaries as well as control charts for a large number of semiconductor process steps by integrating the capabilities of SAS, Unix, OpenVMS, Promis and HTML. The resulting information, including over 1200 control charts, is updated daily without human intervention, and made available in a user-friendly manner on the SPC Website. The sole statistician can thus devote his time to activities that need his personal attention. Texas Instruments is in the process of patenting the automation technology that has been summarized in this paper. (C) 2001 Elsevier Science B.V. All rights reserved. | [
"automation",
"sas",
"spc",
"web page"
] | [
"P",
"P",
"P",
"U"
] |
U5iwQTL | High performance low power CMOS dynamic logic for arithmetic circuits | This paper presents the design of high performance and low power arithmetic circuits using a new CMOS dynamic logic family, and analyzes its sensitivity against technology parameters for practical applications. The proposed dynamic logic family allows for a partial evaluation in a computational block before its input signals are valid, and quickly performs a final evaluation as soon as the inputs arrive. The proposed dynamic logic family is well suited to arithmetic circuits where the critical path is made of a large cascade of inverting gates. Furthermore, circuits based on the proposed concept perform better in high fanout and high switching frequencies due to both lower delay and dynamic power consumption. Experimental results, for practical circuits, demonstrate that the low power feature of the proposed dynamic logic provides for smaller propagation time delay (3.5 times), lower energy consumption (55%), and similar combined delay, power consumption and active area product (only 8% higher), while exhibiting lower sensitivity to power supply, temperature, capacitive load and process variations than the dynamic domino CMOS technologies. | [
"dynamic logic",
"low power arithmetic circuits",
"cmos digital integrated circuits",
"cmos logic circuits",
"high speed arithmetic circuits"
] | [
"P",
"P",
"M",
"R",
"M"
] |
ga3kjCg | Processes and products in a multi-level metamodeling architecture | Following the successful use of object-oriented metamodeling in the definition of the UML and other notation standards, there is increasing interest in extending the approach to cover other concepts of software development, including processes. However, it turns out that the "obvious" approaches for using metamodels to describe processes and artifacts independently do not integrate well together in a natural and straightforward way. In this paper we discuss the problems and inconsistencies that can arise when trying to model a process and the products it creates within the same metamodeling framework, and present a solution that not only avoids many of these problems but also qualifies as a general metamodeling pattern. We then generalize the conceptual architecture to support the sound co-modeling of all independent areas of concern within the context of strict metamodeling. | [
"metamodeling",
"strictness",
"process modeling",
"modeling spaces"
] | [
"P",
"P",
"R",
"M"
] |
-h5n9Df | Short signatures without random oracles and the SDH assumption in bilinear groups | We describe a short signature scheme that is strongly existentially unforgeable under an adaptive chosen message attack in the standard security model. Our construction works in groups equipped with an efficient bilinear map, or, more generally, an algorithm for the Decision Diffie-Hellman problem. The security of our scheme depends on a new intractability assumption we call Strong Diffie-Hellman (SDH), by analogy to the Strong RSA assumption with which it shares many properties. Signature generation in our system is fast and the resulting signatures are as short as DSA signatures for comparable security. We give a tight reduction proving that our scheme is secure in any group in which the SDH assumption holds, without relying on the random oracle model. | [
"digital signatures",
"bilinear pairings",
"strong unforgeability",
"standard model"
] | [
"M",
"M",
"R",
"R"
] |
43BLPn9 | Fuzzy goal programming: A parametric approach | Narasimhan incorporated fuzzy set theory within goal formulation in 1980. Since then, much research has been performed in this field, and various models for solving fuzzy goal programming have been proposed. One of the well-known models, an additive model, was proposed by Tiwari et al. in 1987 [19]. This paper is an extension of the Tiwari et al. model that deals with the sum of weighted negative deviations between the desirable achievement degree and the common target. Here, properties of the model are presented. A numerical example is also given to illustrate the approach. | [
"fuzzy goal programming",
"goal programming",
"weight",
"negative deviation",
"membership function"
] | [
"P",
"P",
"P",
"P",
"U"
] |
AUNSoaD | On distributed fault-tolerant detection in wireless sensor networks | In this paper, we consider two important problems for distributed fault-tolerant detection in wireless sensor networks: 1) how to address both the noise-related measurement error and sensor fault simultaneously in fault-tolerant detection and 2) how to choose a proper neighborhood size n for a sensor node in fault correction such that the energy could be conserved. We propose a fault-tolerant detection scheme that explicitly introduces the sensor fault probability into the optimal event detection process. We mathematically show that the optimal detection error decreases exponentially with the increase of the neighborhood size. Experiments with both Bayesian and Neyman-Pearson approaches in simulated sensor networks demonstrate that the proposed algorithm is able to achieve better detection and better balance between detection accuracy and energy usage. Our work makes it possible to perform energy-efficient fault-tolerant detection in a wireless sensor network. | [
"wireless sensor networks",
"energy-efficiency",
"distributed event detection",
"fault tolerance",
"sensor fusion"
] | [
"P",
"P",
"R",
"M",
"M"
] |
53UuDid | weighted voting for replicated data | In a new algorithm for maintaining replicated data, every copy of a replicated file is assigned some number of votes. Every transaction collects a read quorum of r votes to read a file, and a write quorum of w votes to write a file, such that r + w is greater than the total number of votes assigned to the file. This ensures that there is a non-null intersection between every read quorum and every write quorum. Version numbers make it possible to determine which copies are current. The reliability and performance characteristics of a replicated file can be controlled by appropriately choosing r, w, and the file's voting configuration. The algorithm guarantees serial consistency, admits temporary copies in a natural way by the introduction of copies with no votes, and has been implemented in the context of an application system called Violet. | [
"weighted voting",
"voting",
"replicated data",
"data",
"algorithm",
"transaction",
"read",
"quorum",
"writing",
"intersection",
"version",
"reliability",
"performance",
"configurability",
"consistency",
"context",
"applications",
"systems",
"computer network",
"file suite",
"representative",
"weak representative",
"file system",
"locking"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"U",
"M",
"U",
"U",
"R",
"U"
] |
3wondvp | Acceleration of Euclidean algorithm and rational number reconstruction | We accelerate the known algorithms for computing a selected entry of the extended Euclidean algorithm for integers and, consequently, for the modular and numerical rational number reconstruction problems. The acceleration is from quadratic to nearly linear time, matching the known complexity bound for the integer gcd, which our algorithm computes as a special case. | [
"rational number reconstruction",
"extended euclidean algorithm"
] | [
"P",
"P"
] |
2u&a2H- | RCT: A distributed tree for supporting efficient range and multi-attribute queries in grid computing | Resource discovery is of great importance in grid environments. Most existing approaches treat all resources equally without any categorizing mechanism. We propose the Resource Category Tree (RCT), which organizes resources based on their characteristics represented by primary attributes (PA). RCT adopts the structure of a distributed AVL tree, with each node representing a specific range of PA values. Though RCT adopts a hierarchical structure, it does not require nodes in higher levels to maintain more information than those in lower levels, which makes RCT highly scalable. RCT features self-organization, load-aware self-adaptation and fault tolerance. Based on RCT, commonly used queries, such as range queries and multi-attribute queries, are well supported. We conduct performance evaluations through comprehensive simulations. | [
"multi-attribute query",
"grid computing",
"resource discovery",
"range query"
] | [
"P",
"P",
"P",
"P"
] |
-hdWv6w | Information systems continuance intention of web-based applications customers: The case of online banking | The proliferation of the Internet has not only allowed businesses to offer their products and services through web-based applications, but it has also undermined their ability to retain their customers. It has reduced search costs, lowered barriers to entry, and diminished the distinctiveness of firms. Effective retention of customers allows firms to grow in size and popularity, thereby increasing their profitability. We extended Commitment-Trust theory, an expectation-confirmation model, and technology acceptance theory to develop a model of IS continuance intention of customers of web-based applications. Relationship commitment and trust were found to be central to IS continuance intention. Also, perceived empowerment influenced relationship commitment, while perceived security influenced trust. Our findings thus supported traditional intention factors, highlighting the role of trust as a stronger predictor of intention than commitment but, contradicting findings from marketing research, trust was found to be a stronger predictor of retention in the e-commerce context. | [
"web-based application",
"retention",
"commitment",
"commitmenttrust theory",
"trust",
"end-user relationship",
"relationship marketing"
] | [
"P",
"P",
"P",
"P",
"P",
"M",
"R"
] |
2Q17kx: | 2PSM: an efficient framework for searching video information in a limited-bandwidth environment | We present a novel technique, called 2-Phase Service Model, for streaming videos to home users in a limited-bandwidth environment. This scheme first delivers some number of non-adjacent data fragments to the client in Phase 1. The missing fragments are then transmitted in Phase 2 as the client is playing back the video. This approach offers many benefits. The isochronous bandwidth required for Phase 2 can be controlled within the capability of the transport medium. The data fragments received during Phase 1 can be used to provide an excellent preview of the video. They can also be used to facilitate VCR-style operations such as fast-forward and fast-reverse. Systems designed based on this method are less expensive because the fast-forward and fast-reverse versions of the video files are no longer needed. Eliminating these files also improves system performance because mapping between the regular files and their fast-forward and fast-reverse versions is no longer part of the VCR operations. Furthermore, since each client machine handles its own VCR-style interaction, this technique is very scalable. We provide simulation results to show that the 2-Phase Service Model is able to handle VCR functions efficiently. We also implement a video player called FRVplayer. With this prototype, we are able to judge that the visual quality of the previews and VCR-style operations is excellent. These features are essential to many important applications. We discuss the application of FRVplayer in the design of a video management system, called VideoCenter. This system is intended for Internet applications such as digital video libraries. | [
"previewing",
"vcr-style interaction",
"video library",
"data organization",
"video on demand",
"world wide web"
] | [
"P",
"P",
"P",
"M",
"M",
"U"
] |
-D1mk8X | Structural stability of silicene-like nanotubes | Silicene-like (4,0) zigzag metal-doped MnSi8(n+1) nanotubes are investigated by first-principles calculations. We show that the geometrical structures of silicon nanotubes can be stabilized by doping metal (K, Ca, Y and Lu). Electronic structure calculations show that Y and Lu atoms gain extra charge from Si atoms, the bonding between Si and the MnSi8(n+1) (M=K, Ca) is of a mixed metallic-covalent nature, and the magnetic moment of K14Si120 quenches completely compared with K7Si64. Some properties are discussed to provide guidance to experimental efforts for nanomagnetic materials and spintronics. | [
"silicene-like nanotube",
"metal encapsulated",
"first principles"
] | [
"P",
"M",
"U"
] |
4c1vqem | Estimation of Chinese agricultural production efficiencies with panel data | Fast and steady economic growth in China during the 1990s attracted much international attention. Given the scarcity of resources, it is important for economic growth to depend on production efficiency improvement to achieve sustainability. As China is the world's second largest foreign capital recipient, foreign capital plays an important role in investment. If economic growth is fuelled by investment, an exodus or a shortage of foreign capital will render growth unsustainable. However, if growth is propelled by improvements in production efficiency, it is more likely to be sustained and to withstand reduction in production input. This paper estimates production efficiency in the agricultural sector in China with a panel data set comprising 30 provinces for the 7-year period, 1991-1997. A panel data model based on the Cobb-Douglas production function is used to represent the production frontier and to compute technical efficiency at the provincial level. Individual effects are tested to determine if pooled estimation is preferred to unpooled (panel) estimation. The test confirms significant differences between the provinces, and hence warrants panel data estimation. Both fixed and random effects models are estimated, with provincial technical inefficiency specified as province-specific intercept terms for the former, and regression disturbances for the latter. Although the random effects model is rejected in favour of the fixed effects model, the latter did not produce estimates with correct signs, and is rejected on economic grounds. Using the random effects model, production efficiency has increased for most provinces, but the gap between the affluent coastal region and the hinterland in the west has increased. (c) 2005 IMACS. Published by Elsevier B.V. All rights reserved. | [
"panel data",
"production frontier",
"random effects",
"fixed effects",
"time varying"
] | [
"P",
"P",
"P",
"P",
"U"
] |
-TUiVT& | On the use of supervised features for unsupervised image categorization: An evaluation | We compared high- and low-level features for unsupervised image categorization. We verified that high-level features significantly outperform low-level features. We assessed how much the performance depends on the dimensionality of the feature vectors. We verified that a simple clustering on supervised features outperforms strategies specifically designed for this task. | [
"supervised features",
"unsupervised image categorization",
"primitive features",
"image clustering"
] | [
"P",
"P",
"M",
"R"
] |
1Jjn8k: | a stable fixed-outline floorplanning method | In this paper, we propose a stable fixed-outline floorplanning method (IARFP). An elaborated method for perturbing solutions, Insertion after Remove (IAR), is devised for the simulated annealing. The IAR operation uses the technique of enumerating positions in Sequence Pair and greatly accelerates the searching. Moreover, based on the analysis of diverse objective functions used in existing research, we suggest a new objective function, which is still effective when combined with other objectives, for the fixed-outline floorplanning. Compared with the previous fixed-outline floorplanners, the proposed method is effective and efficient. Experiments showed that the proposed fixed-outline floorplanner achieved a 100% success rate efficiently when optimizing area and wire-length simultaneously, while getting much smaller wire-length. On the other hand, we validated once more by experiments that an aspect ratio close to one is beneficial to wire-length. | [
"fixed-outline",
"floorplanning",
"sequence pair"
] | [
"P",
"P",
"P"
] |
xH28ZLg | Torque optimizing control with singularity-robustness for kinematically redundant robots | A new control method for kinematically redundant manipulators having the properties of torque-optimality and singularity-robustness is developed. A dynamic control equation, an equation of joint torques that should be satisfied to get the desired dynamic behavior of the end-effector, is formulated using the feedback linearization theory. The optimal control law is determined by locally optimizing an appropriate norm of joint torques using the weighted generalized inverses of the manipulator Jacobian-inertia product. In addition, the optimal control law is augmented with fictitious joint damping forces to stabilize the uncontrolled dynamics acting in the null-space of the Jacobian-inertia product. This paper also presents a new method for the robust handling of robot kinematic singularities in the context of joint torque optimization. Control of the end-effector motions in the neighborhood of a singular configuration is based on the use of the damped least-squares inverse of the Jacobian-inertia product. A damping factor as a function of the generalized dynamic manipulability measure is introduced to reduce the end-effector acceleration error caused by the damping. The proposed control method is applied to the numerical model of SNU-ERC 3-DOF planar direct-drive manipulator. | [
"singularity-robustness",
"kinematically redundant manipulators",
"torque-optimality",
"dynamic control equation",
"weighted generalized inverses",
"jacobian-inertia product",
"damped least-squares inverses",
"generalized dynamic manipulability measure"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
vRQXTcy | A note on rationality conditions of fuzzy choice functions | This note investigates the rationality conditions of fuzzy choice functions. For studying the rationality of fuzzy choice functions, Banerjee introduced many regularity conditions that are defined for revealed preference relations. However, in the standard framework of economics and decision-making theory, researchers usually assume people's choice behavior is observable, while the preference (behind the choice behavior) is unobservable. It is worthwhile to study rationality conditions that are defined for fuzzy choice functions instead of those for revealed preference relations. For this purpose, we propose a set of new rationality conditions depending only on the fuzzy choice function. We study the connection among these rationality conditions and give a relatively complete description of them. We also find that Banerjee-rationality is too weak to capture some kind of consistency of the fuzzy choice function. This motivates us to introduce two notions of strong rationality. The relationships between our new consistency conditions and the rationality (Banerjee-rationality and strong rationality) of the fuzzy choice function are discussed. | [
"rationality",
"fuzzy choice function",
"fuzzy revealed preference relation"
] | [
"P",
"P",
"R"
] |
ZrB::2C | An optical study of alumina films' thermal evolution upon ammonia annealing | The physical and structural evolution of alumina films deposited by ALCVD annealed at high temperatures in N2 has been studied. Low temperature post deposition treatments in NH3 (PDN) have been performed to evaluate the impact of nitrogen incorporation in the alumina film on its thermal stability. Thermal evolution has been studied by deep UV spectroscopic ellipsometry and grazing X-ray reflectance techniques. AFM measurements were also performed to confirm and complete the ellipsometric and GXR analysis. The change of the crystalline structure was detected by ellipsometry through the change in the UV refractive index, while the GXR provided a unique thickness evaluation. It was therefore possible to determine the layer densification after the thermal treatment and the impact of the PDN on the transition temperature. | [
"alumina",
"ellipsometry",
"high-k"
] | [
"P",
"P",
"U"
] |
12rmcTN | Evaluating the Vulnerability of Network Traffic Using Joint Security and Routing Analysis | Joint analysis of security and routing protocols in wireless networks reveals vulnerabilities of secure network traffic that remain undetected when security and routing protocols are analyzed independently. We formulate a class of continuous metrics to evaluate the vulnerability of network traffic as a function of security and routing protocols used in wireless networks. We develop two complementary vulnerability definitions using set theoretic and circuit theoretic interpretations of the security of network traffic, allowing a network analyst or an adversary to determine weaknesses in the secure network. We formalize node capture attacks using the vulnerability metric as a nonlinear integer programming minimization problem and propose the GNAVE algorithm, a Greedy Node capture Approximation using Vulnerability Evaluation. We discuss the availability of security parameters to the adversary and show that unknown parameters can be estimated using probabilistic analysis. We demonstrate vulnerability evaluation using the proposed metrics and node capture attacks using the GNAVE algorithm through detailed examples and simulation. | [
"security",
"routing",
"wireless networks",
"node capture attacks",
"adversary models"
] | [
"P",
"P",
"P",
"P",
"M"
] |
-QeKZ2L | The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing | "Approximate message passing" (AMP) algorithms have proved to be effective in reconstructing sparse signals from a small number of incoherent linear measurements. Extensive numerical experiments further showed that their dynamics is accurately tracked by a simple one-dimensional iteration termed state evolution. In this paper, we provide a rigorous foundation for state evolution. We prove that it indeed holds asymptotically in the large system limit for sensing matrices with independent and identically distributed Gaussian entries. While our focus is on message passing algorithms for compressed sensing, the analysis extends beyond this setting, to a general class of algorithms on dense graphs. In this context, state evolution plays the role that density evolution has for sparse graphs. The proof technique is fundamentally different from the standard approach to density evolution, in that it copes with a large number of short cycles in the underlying factor graph. It relies instead on a conditioning technique recently developed by Erwin Bolthausen in the context of spin glass theory. | [
"compressed sensing",
"state evolution",
"message passing algorithms",
"density evolution",
"random matrix theory"
] | [
"P",
"P",
"P",
"P",
"M"
] |
2towFcL | Translation-invariant semicopula-based integrals: The solution to Hutník's open problem | In this note, we give a solution to Problem 9.2, which was presented by Mesiar and Stupňanová (2015) [6]. We show that the class of semicopulas solving Problem 9.2 contains only the Łukasiewicz t-norm. | [
"semicopula",
"integral",
"capacity",
"?ukasiewicz t-norm",
"opposite-sugeno integral"
] | [
"P",
"P",
"U",
"M",
"M"
] |
3WxKbC8 | A new TCP for persistent packet reordering | Most standard implementations of TCP perform poorly when packets are reordered. In this paper, we propose a new version of TCP that maintains high throughput when reordering occurs and yet, when packet reordering does not occur, is friendly to other versions of TCP. The proposed TCP variant, or TCP-PR, does not rely on duplicate acknowledgments to detect a packet loss. Instead, timers are maintained to keep track of how long ago a packet was transmitted. In case the corresponding acknowledgment has not yet arrived and the elapsed time since the packet was sent is larger than a given threshold, the packet is assumed lost. Because TCP-PR does not rely on duplicate acknowledgments, packet reordering (including out-of-order acknowledgments) has no effect on TCP-PR's performance. Through extensive simulations, we show that TCP-PR performs consistently better than existing mechanisms that try to make TCP more robust to packet reordering. In the case that packets are not reordered, we verify that TCP-PR maintains the same throughput as typical implementations of TCP (specifically, TCP-SACK) and shares network resources fairly. Furthermore, TCP-PR only requires changes to the TCP sender side making it easier to deploy. | [
"packet reordering",
"congestion control",
"transport protocols"
] | [
"P",
"U",
"U"
] |
Zmcp9qS | Catalytic activities of dismutation reactions of Cu(bpy)Br-2 compound and its derivatives as SOD mimics: A theoretical study | The systematic investigations on the catalytic mechanisms of dismutation reactions for the superoxide dismutase (SOD) mimics of Cu(bpy)Br-2 and its derivatives Cu(L-1)Br-2 and Cu(L-2)Br-2 (bpy=2,2'-dipyridyl, L-1=5,5'-di[1-(triethylammonio)methyl]-2,2'-dipyridyl cation and L-2=5,5'-di[1-(tributylammonio)methyl]-2,2'-dipyridyl cation) have been carried out by the DFT/UB3LYP method. The catalytic reaction for each of these compounds is confirmed to be a redox cycle consisting of two half-reactions. In the first half-reaction, a proton is transferred from the hydroperoxide neutral radical (·OOH) to one nitrogen atom of the pyridinic ring, with Cu(II) being reduced to Cu(I) in the meantime. In the second half-reaction, the proton is transferred back to another hydroperoxide radical (·OOH) to form a hydrogen peroxide molecule, oxidizing Cu(I) back to its initial state. Our results show that the first half-reaction for all reactions is the rate-controlling step, with forward barrier values of 6.61, 4.84 and 3.79 kcal·mol(-1) for Cu(bpy)Br-2, Cu(L-1)Br-2, and Cu(L-2)Br-2, respectively. Consequently, the SOD-like activities of the three mimics are in the order of Cu(bpy)Br-2 < Cu(L-1)Br-2 < Cu(L-2)Br-2. The factors affecting the SOD-like activity of the studied compounds have also been discussed. | [
"superoxide dismutase",
"dft/ub3lyp",
"sod-like activity",
"dismutation mechanism"
] | [
"P",
"P",
"P",
"R"
] |
-1fRtko | Maintenance of a piercing set for intervals with applications | We show how to maintain efficiently a minimum piercing set for a set S of intervals on the line, under insertions and deletions to/from S. A linear-size dynamic data structure is presented, which enables us to compute a new minimum piercing set following an insertion or deletion in time O(c(S) log |S|), where c(S) is the size of the new minimum piercing set. We also show how to maintain a piercing set for S of size at most (1 + ε)c(S), for 0 < ε ≤ 1, in Ō((log |S|)/ε) amortized time per update. We then apply these results to obtain efficient solutions to the following three problems: (i) the shooter location problem, (ii) computing a minimum piercing set for arcs on a circle, and (iii) dynamically maintaining a box cover for a d-dimensional point set. | [
"piercing set",
"geometric optimization",
"dynamic algorithms"
] | [
"P",
"U",
"M"
] |
-vvcw-x | Using time-driven activity-based costing to manage digital forensic readiness in large organisations | A digital forensic readiness (DFR) programme consists of a number of activities that should be chosen and managed with respect to cost constraints and risk. Traditional cost systems, however, cannot provide the cost of individual activities. This makes it difficult or impossible for organisations to consider cost when making decisions about specific activities. In this paper we show that the relatively new cost system, time-driven activity-based costing (TDABC), can be used to determine the cost of implementing and managing activities required for DFR. We show through analysis and simulation that the cost information from a TDABC model can be used for such decisions. We also discuss some of the factors that ought to be considered when implementing or managing the use of TDABC in a large organisation. | [
"time-driven activity-based costing",
"digital forensic readiness",
"forensics management",
"cost management"
] | [
"P",
"P",
"R",
"R"
] |
3EpKKps | Biochemical fluctuations, optimisation and the linear noise approximation | Stochastic fluctuations in molecular numbers have been in many cases shown to be crucial for the understanding of biochemical systems. However, the systematic study of these fluctuations is severely hindered by the high computational demand of stochastic simulation algorithms. This is particularly problematic when, as is often the case, some or many model parameters are not well known. Here, we propose a solution to this problem, namely a combination of the linear noise approximation with optimisation methods. The linear noise approximation is used to efficiently estimate the covariances of particle numbers in the system. Combining it with optimisation methods in a closed-loop to find extrema of covariances within a possibly high-dimensional parameter space allows us to answer various questions. Examples are, what is the lowest amplitude of stochastic fluctuations possible within given parameter ranges? Or, which specific changes of parameter values lead to the increase of the correlation between certain chemical species? Unlike stochastic simulation methods, this has no requirement for small numbers of molecules and thus can be applied to cases where stochastic simulation is prohibitive. | [
"optimisation",
"linear noise approximation",
"mitogen-activated kinases signalling",
"copasi",
"intrinsic noise",
"stochastic biochemical models",
"systems biology"
] | [
"P",
"P",
"U",
"U",
"M",
"R",
"M"
] |
1L&xTww | A new iris recognition method using independent component analysis | In a conventional method based on quadrature 2D Gabor wavelets to extract iris features, the iris recognition is performed by a 256-byte iris code, which is computed by applying the Gabor wavelets to a given area of the iris. However, there is a code redundancy because the iris code is generated by basis functions without considering the characteristics of the iris texture. Therefore, the size of the iris code is increased unnecessarily. In this paper we propose a new feature extraction algorithm based on independent component analysis (ICA) for a compact iris code. We implemented the ICA to generate optimal basis functions which could represent iris signals efficiently. In practice, the coefficients of the ICA expansions are used as feature vectors. Then iris feature vectors are encoded into the iris code for storing and comparing individuals' iris patterns. Additionally, we introduce a method to refine the ICA basis functions for improving the recognition performance. Experimental results show that our proposed method has an equal error rate similar to that of a conventional method based on the Gabor wavelets, and the iris code size of our proposed method is five times smaller than that of the Gabor wavelets. | [
"iris recognition",
"independent component analysis",
"feature extraction",
"biometrics"
] | [
"P",
"P",
"P",
"U"
] |
1itboEB | optimizing metrics in police routing algorithms | A large part of the mission of state troopers is to prevent traffic accidents and to quickly respond to the accidents that do happen. However, driving about aimlessly during their shift is not efficient. Certain areas can be identified as "hotspots", places where crashes are known to frequently occur. It is advantageous to have officers target these critical locations during their patrol routes. Multiple officers taking similar routes is also inefficient. The number of officers patrolling is limited, and by keeping them spread out, response time to crashes can be decreased. The purpose of the Turn programming language is to create efficient routes daily, but with a degree of randomness to prevent the routes from becoming predictable. At its core is a graph representing the roads of Alabama, with vertices at each milepost and intersection. Turn programs utilize set reduction functions to choose what vertices officers should patrol. Depending on what functions the programmer uses and the order they are used, the route may be different to reflect the changing priorities. A Turn program's worth is measured by a number of metrics, such as how many hotspots were covered each day, how long those hotspots were patrolled, and time taken to respond to crashes in the simulation. Additionally, a program is worthless if the routes it creates are not realistic. In this paper, we present an analysis of various Turn programs, explain how they affect the metrics, and show a program that strikes a balance between them. | [
"police",
"state trooper",
"hotspot",
"patrol routes",
"graph",
"vertex"
] | [
"P",
"P",
"P",
"P",
"P",
"U"
] |
48375S: | A precise capacitance-to-pulse width converter for integrated sensors | This work describes a novel approach for interfacing capacitive sensors in the sub-pF range. The system generates a PWM signal with a linear relationship between the pulse duration and the sensor capacitance. The circuit exhibits intrinsic low sensitivity to temperature and process variations and is therefore an interesting solution when extremely wide operating temperature ranges are required. A detailed analysis of the noise characteristics, aimed at providing guidance for circuit optimisation, is presented. The interface has been designed using the 0.35 μm BCD6 process of STMicroelectronics and tested by means of electrical simulations. | [
"capacitive sensor",
"sensor interface",
"pulse width modulation"
] | [
"P",
"R",
"M"
] |
21&CFjH | Data mining based intelligent analysis of threatening e-mail | This paper proposes a decision tree based classification method to detect e-mails that contain terrorism information. The proposed classification method is an incremental and user-feedback based extension of a decision tree induction algorithm named Ad Infinitum. We show that the Ad Infinitum algorithm is a good choice for threatening e-mail detection as it runs fast on large and high dimensional databases, is easy to tune and is highly accurate, outperforming popular algorithms such as Decision Trees, Support Vector Machines and Naive Bayes. In particular, we are interested in detecting fraudulent and possibly criminal activities from such e-mails. | [
"data mining",
"classification",
"threatening e-mail detection"
] | [
"P",
"P",
"P"
] |
4KQWBFF | Total factor productivity growth in Uganda's telecommunications industry | The telecommunication sector is usually thought to be characterized by high productivity growth rates arising from increasing returns to scale. The actual productivity patterns in the sector, however, need to be empirically determined. A panel data set was assembled and a common set of input and output indicators was constructed to support the estimation of the Malmquist Total Factor Productivity index via input-oriented Data Envelopment Analysis. A general specification encompassing all available input and output data was employed to obtain the average total factor productivity changes for the sector. Over the study period, there was total factor productivity growth in Uganda's telecommunications industry, which was mainly due to technical or technological progress as opposed to technical efficiency. These results indicate the existence of a potential for tariff reduction via the X-factor in the price cap formula. | [
"total factor productivity",
"telecommunications",
"malmquist",
"data envelopment analysis"
] | [
"P",
"P",
"P",
"P"
] |
1BH9Yqc | Nested structure in parameterized rough reduction | In this paper, by strict mathematical reasoning, we discover the relationship between the parameters and the reducts in parameterized rough reduction. This relationship, named the nested reduction, shows that the reducts form a nested structure as the parameter monotonically increases. We present a systematic theoretical framework that provides some basic principles for constructing the nested structure in parameterized rough reduction. Some specific parameterized rough set models in which the nested reduction can be constructed are pointed out by strict mathematical reasoning. Based on the nested reduction, we design several quick algorithms to find a different reduct when one reduct is already given. Here, 'different' refers to reducts obtained with different parameters. All these algorithms are helpful for quickly finding a proper reduct in the parameterized rough set models. The numerical experiments demonstrate the feasibility and the effectiveness of the nested reduction approach. | [
"nested structure",
"parameterized rough sets",
"attribute reduction",
"variable precision"
] | [
"P",
"P",
"M",
"U"
] |
-rxJoGG | Dual path communications over multiple spanning trees for networked control systems | Switched Ethernet networks are increasingly deployed in industry. The Spanning Tree Protocol (STP) implemented in the switches enables management of the link connectivity. However, the reconfiguration time of STP when a link failure occurs does not satisfy industrial constraints. The objective of this paper is to propose a method, based only on standards, that mitigates the probability of disconnection between nodes having hard real-time properties. The approach developed in this paper consists of duplicating frames and forwarding them on different paths. These paths are optimized and specified by using genetic algorithms. OPNET simulations show the benefit of this proposal on a particular Networked Control System. | [
"genetic algorithm",
"real-time systems",
"fault tolerance",
"spanning-tree",
"switched networks"
] | [
"P",
"R",
"U",
"U",
"R"
] |
ALV9tqQ | A study about the efficiency of formal high-level synthesis applied to verification | The use of a formal synthesis system is proposed as an efficient alternative for the formal verification of RT-level circuits obtained from algorithmic-level specifications by high-level synthesis (HLS) tools. The goal of the proposal is to recreate, within the formal synthesis system, any design process performed by an external HLS tool in order to check its correctness. This is achieved by utilizing the post-synthesis reports given by HLS tools to guide the derivation process within the formal synthesis system. The paper places particular emphasis on two aspects: to give a comprehensive vision of the formal scenario, and to demonstrate its practical viability. In relation to the former aspect, the methodology is detailed and the architecture of the whole system is summarized (including specification mechanisms, derivation rules, HLS tasks formalization, automated derivation procedure, etc.). With respect to the latter one, a theoretical study (confirmed by a set of experiments) shows that the formal derivation process has quadratic and linear complexity in terms of time and memory consumption, respectively. Finally, the paper concludes that following the proposal, even commercial HLS processes can be verified with a reduced overhead (5% on average) without modifying the HLS tools. | [
"high-level synthesis",
"formal synthesis",
"rt-level formal verification",
"joint design and verification cycle",
"formal verification overhead"
] | [
"P",
"P",
"R",
"M",
"R"
] |
oaEe3TZ | Constitutive property behavior of an ultra-high-performance concrete with and without steel fibers | A laboratory investigation was conducted to characterize the constitutive property behavior of Cor-Tuf, an ultra-high-performance composite concrete. Mechanical property tests (hydrostatic compression, unconfined compression (UC), triaxial compression (TXC), unconfined direct pull (DP), uniaxial strain, and uniaxial-strain-load/constant-volumetric-strain tests) were performed on specimens prepared from concrete mixtures with and without steel fibers. From the UC and TXC test results, compression failure surfaces were developed for both sets of specimens. Both failure surfaces exhibited a continuous increase in maximum principal stress difference with increasing confining stress. The DP test results determined the unconfined tensile strengths of the two mixtures. The tensile strength of each mixture was less than the generally assumed tensile strength for conventional strength concrete, which is 10 percent of the unconfined compressive strength. Both concretes behaved similarly, but Cor-Tuf with steel fibers exhibited slightly greater strength with increased confining pressure, and Cor-Tuf without steel fibers displayed slightly greater compressibility. | [
"ultra-high-performance concrete",
"steel fibers",
"mechanical response"
] | [
"P",
"P",
"M"
] |
-upfiDu | Approximating the larger eddies in fluid motion V: Kinetic energy balance of scale similarity models | We first review a classical scale-similarity model used to simulate the motion of large eddies in a turbulent flow. The kinetic energy balance of this model is very unclear in theory. Experiments with it have often reported that an additional Smagorinsky-type subgrid-scale term is needed. This term is not benign; it can significantly alter the predicted long-term dynamics of the large eddies. However, we also show that the principle of scale-similarity (introduced in 1980 by Bardina, Ferziger and Reynolds) can also give rise to other scale similarity models which have the correct kinetic energy balance. (C) 2000 Elsevier Science Ltd. All rights reserved. | [
"scale similarity",
"turbulence",
"navier-stokes",
"large eddy simulation"
] | [
"P",
"P",
"U",
"R"
] |
3ttdHnk | Lyashko-Looijenga morphisms and submaximal factorizations of a Coxeter element | When W is a finite reflection group, the noncrossing partition lattice \(\operatorname{NC}(W)\) of type W is a rich combinatorial object, extending the notion of noncrossing partitions of an n-gon. A formula (for which the only known proofs are case-by-case) expresses the number of multichains of a given length in \(\operatorname{NC}(W)\) as a generalized Fuss-Catalan number, depending on the invariant degrees of W. We describe how to understand some specifications of this formula in a case-free way, using an interpretation of the chains of \(\operatorname{NC}(W)\) as fibers of a Lyashko-Looijenga covering (\(\operatorname{LL}\)), constructed from the geometry of the discriminant hypersurface of W. We study algebraically the map \(\operatorname{LL}\), describing the factorizations of its discriminant and its Jacobian. As byproducts, we generalize a formula stated by K. Saito for real reflection groups, and we deduce new enumeration formulas for certain factorizations of a Coxeter element of W. | [
"coxeter element",
"noncrossing partition lattice",
"fucatalan number",
"lyashkolooijenga covering",
"finite coxeter group",
"complex reflection group"
] | [
"P",
"P",
"P",
"P",
"R",
"M"
] |
34&Ydi4 | Differential quadrature element analysis using extended differential quadrature | The extended differential quadrature (EDQ) has been proposed. A certain order derivative or partial derivative of the variable function with respect to the coordinate variables at an arbitrary discrete point is expressed as a weighted linear sum of the values of the function and/or its possible derivatives at all grid nodes. The grid pattern can be fixed while the selection of discrete points for defining discrete fundamental relations is flexible. This method can be applied to the differential quadrature element and generalized differential quadrature element analyses. (C) 2000 Elsevier Science Ltd. All rights reserved. | [
"differential quadrature",
"extended differential quadrature",
"generic differential quadrature",
"differential quadrature element method",
"generalized differential quadrature element method",
"weighting coefficients"
] | [
"P",
"P",
"P",
"R",
"R",
"M"
] |
47ynm:H | The power of prediction with social media | Purpose - Social media provide an impressive amount of data about users and their interactions, thereby offering computer and social scientists, economists, and statisticians - among others - new opportunities for research. Arguably, one of the most interesting lines of work is that of predicting future events and developments from social media data. However, current work is fragmented and lacks widely accepted evaluation approaches. Moreover, since the first techniques emerged rather recently, little is known about their overall potential, limitations and general applicability to different domains. Therefore, better understanding the predictive power and limitations of social media is of utmost importance. Design/methodology/approach - Different types of forecasting models and their adaptation to the special circumstances of social media are analyzed and the most representative research conducted to date is surveyed. Presentations of current research on techniques, methods, and empirical studies aimed at the prediction of future or current events from social media data are provided. Findings - A taxonomy of prediction models is introduced, along with their relative advantages and the particular scenarios they have been applied to. The main areas of prediction that have attracted research so far are described, and the main contributions made by the papers in this special issue are summarized. Finally, it is argued that statistical models seem to be the most fruitful approach to apply to make predictions from social media data. Originality/value - This special issue raises important questions to be addressed in the field of social media-based prediction and forecasting, fills some gaps in current research, and outlines future lines of work. | [
"predicting",
"social media",
"forecasting",
"computational social science"
] | [
"P",
"P",
"P",
"M"
] |
b8itcKa | Usage patterns of an electronic theses and dissertations system | The Korea Institute of Science and Technology Information (KISTI) Electronic Theses and Dissertations (ETD) system is a national digital library of ETDs in South Korea. Provides a comprehensive picture of how this system has been used during its first two years in service since June 1999. The system transaction logs were collected and analysed to reveal the usage patterns of the system. Results of the study indicate that the KISTI ETD system usage has seen a significant increase since its second year. While most users appear to be domestic users, the system also attracts users from many other countries, suggesting that the KISTI ETD system has become a part of the international networked digital library of theses and dissertations. It was also found that there are a very large number of one-time visitors to the KISTI ETD system. Nevertheless, the system began to maintain a frequent user group in its second year in service. The usage statistics regarding system features and the characteristics of users' visits indicated that the search function was by far the most frequently used system function. Finally, discusses the implications of the findings. | [
"korea",
"libraries",
"internet",
"academic libraries",
"systems design",
"transactional analysis"
] | [
"P",
"P",
"U",
"M",
"M",
"M"
] |
3eX7Vj1 | Transaction fusion: A model for data recovery from information attacks | The escalation of electronic attacks on databases in recent times demands fast and efficient recovery methods. The existing recovery techniques are too time-consuming as they first undo all malicious and affected transactions individually, and then redo all affected transactions, again, individually. In this paper, we propose a method that accelerates the undo and redo phases of the recovery. The method developed involves combining or fusing malicious or affected transactions occurring in groups. These fused transactions are executed during undo and redo phases instead of execution of individual transactions. By fusing relevant transactions into a single transaction, the number of operations such as start, commit, read, and write is minimized. Thus, data items which were required to be accessed multiple times in the case of individual transactions are accessed only once in a fused transaction. The amount of log I/Os is reduced. This expedites the recovery procedure in the event of information attacks. A simulation analysis of the proposed model confirmed our claim. | [
"affected transaction",
"fused transaction",
"malicious transaction",
"schedule",
"damage assessment and recovery"
] | [
"P",
"P",
"R",
"U",
"M"
] |
4RrSetd | Approximate polytope ensemble for one-class classification | The methodology uses a convex hull for modeling one-class classification problems. Random projections are used to approximate the convex hull in high dimensional spaces. Expansions of the approximate hulls are considered to set the optimal operating point. Exhaustive validation is performed on three different typologies of problems. | [
"one-class classification",
"random projections",
"convex hull",
"high-dimensionality",
"ensemble learning"
] | [
"P",
"P",
"M",
"U",
"M"
] |
3pk:Ko& | Detection of sparse targets with structurally perturbed echo dictionaries | In this paper, a novel algorithm is proposed to achieve robust high resolution detection in sparse multipath channels. Currently used sparse reconstruction techniques are not immediately applicable in multipath channel modeling. Performance of standard compressed sensing formulations based on discretization of the multipath channel parameter space degrades significantly when the actual channel parameters deviate from the assumed discrete set of values. To alleviate this off-grid problem, we make use of particle swarm optimization (PSO) to perturb each grid point that resides in a multipath component cluster. Orthogonal matching pursuit (OMP) is used to reconstruct sparse multipath components in a greedy fashion. Extensive simulation results quantify the performance gain and robustness obtained by the proposed algorithm against the off-grid problem faced in sparse multipath channels. | [
"particle swarm optimization (pso)",
"orthogonal matching pursuit (omp)",
"compressed sensing (cs)",
"cross-ambiguity function (caf)",
"channel identification",
"sparse approximation"
] | [
"P",
"P",
"M",
"M",
"M",
"M"
] |
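The OMP building block named above is easy to sketch; the PSO perturbation of grid atoms is omitted, so this is only the greedy reconstruction step on a fixed dictionary (the dictionary and sparsity level below are made up for illustration).

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A to explain y."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # atom most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef         # least-squares re-fit on the support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 256))
A /= np.linalg.norm(A, axis=0)                      # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 99, 200]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.flatnonzero(omp(A, y, 3)))                 # typically [10, 99, 200]
```

In the paper's setting each selected atom would then be perturbed off the grid by PSO before the next greedy step; that outer loop is not shown.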
2:c3owN | Combining NEAT and PSO for learning tactical human behavior | This article presents and discusses a machine learning algorithm called PIGEON used to build agents capable of displaying tactical behavior in various domains. Such tactical behavior can be relevant in military simulations and video games, as well as in everyday tasks in the physical world, such as driving an automobile. Furthermore, PIGEON displays good performance across two different approaches to learning (observational and experiential) and across multiple domains. PIGEON is a hybrid algorithm, combining NEAT and PSO in two different manners. The investigation described in this paper compares the performance of the two versions of PIGEON to each other as well as to NEAT and to PSO individually. These four machine learning algorithms are applied in two different approaches to learning (through observation of human performance and through experience), as well as in three distinct domain testbeds. The criteria used to compare them were high proficiency in task completion and rapid learning. Results indicate that overall, PIGEON worked best when NEAT and PSO were applied in an alternating manner. This combination was called PIGEON-Alternate, or simply Alternate. The two versions of the PIGEON algorithm, the tests conducted, the results obtained and the conclusions are described in detail. | [
"machine learning",
"neuroevolution",
"particle swarm optimization",
"tactical reasoning"
] | [
"P",
"U",
"U",
"M"
] |
4B&WrrF | Correction of segmented lung boundary for inclusion of pleural nodules and pulmonary vessels in chest CT images | We propose a new curvature-based method for correcting the segmented lung boundary. Our method consists of the following steps. First, the lungs are extracted from chest CT images by an automatic segmentation method. Second, the segmented lung contours are corrected by lung smoothing in each axial slice. Our scan line search provides an efficient contour tracing and curvature calculation. Finally, the smoothed lung contours are corrected by 3D VOI refinement. This increases the smoothness in the z-axis without distortion of the lung boundary. Experimental results show that our method effectively incorporates the pleural nodules and pulmonary vessels into the segmentation results. | [
"ct",
"pleural nodule",
"pulmonary vessels",
"lung smoothing",
"contour tracing",
"curvature calculation",
"lung segmentation",
"airways"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"R",
"U"
] |
1ncXhKx | Automation of chamfering by an industrial robot; for the case of hole on free-curved surface | The study deals with the automatic chamfering for the case of hole on free-curved surface on the basis of CAD data, using an industrial robot. As a chamfering tool, a rotary-bar driven by an electric motor is mounted to the arm of the robot having six degrees-of-freedom in order to give an arbitrary position and attitude to the tool. The robot control command converted from the chamfering path is transmitted directly to the robot. From the experimental results, the system is found effective to remove a burr along the edge of a hole on a workpiece with free-curved surface. | [
"chamfering",
"industrial robot",
"free-curved surface",
"cad",
"cam"
] | [
"P",
"P",
"P",
"P",
"U"
] |
4UQLb21 | Reconstructing orthogonal polyhedra from putative vertex sets | In this paper we study the problem of reconstructing orthogonal polyhedra from a putative vertex set, i.e., we are given a set of points and want to find an orthogonal polyhedron for which this is the set of vertices. This is well-studied in 2D; we mostly focus on 3D, and on the case where the given set of points may be rotated beforehand. We obtain fast algorithms for reconstruction in the case where the answer must be orthogonally convex. | [
"reconstruction",
"orthogonal polyhedra",
"vertex set"
] | [
"P",
"P",
"P"
] |
1UC1CXk | affective expressions of machines | Emotions should play an important role in the design of interfaces because people interact with machines as if they were social actors [4]. We developed and tested a model for the convincingness of affective expressions, based on Fogg and Hsiang Tseng [3]. The empirical data did not support our original model. Furthermore, the experiment investigated if the type of emotion (happiness, sadness, anger, surprise, fear and disgust), knowledge about the source (human or machine), the level of abstraction (natural face, computer rendered face and matrix face) and medium of presentation (visual, audio/visual, audio) of an affective expression influences its convincingness and distinctness. Only the type of emotion and multimedia presentations had an effect on convincingness. The distinctness of an expression depends on the abstraction and the media through which it is presented. | [
"affective expressions",
"emotion",
"convincingness",
"abstraction",
"face",
"distinctness",
"speech",
"music",
"modality"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"U",
"U",
"U"
] |
-Sf9h8V | Hyperbolic Dirac Nets for medical decision support. Theory, methods, and comparison with Bayes Nets | We recently introduced the concept of a Hyperbolic Dirac Net (HDN) for medical inference on the grounds that, while the traditional Bayes Net (BN) is popular in medicine, it is not suited to that domain: there are many interdependencies such that any node can be ultimately conditional upon itself. A traditional BN is a directed acyclic graph by definition, while the HDN is a bidirectional general graph closer to a diffuse field of influence. Cycles require bidirectionality; the HDN uses a particular type of imaginary number from Dirac's quantum mechanics to encode it. Comparison with the BN is made alongside a set of recipes for converting a given BN to an HDN, also adding cycles that do not usually require reiterative methods. This conversion is called the P-method. Conversion to cycles can sometimes be difficult, but more troubling was that the original BN had probabilities needing adjustment to satisfy realism alongside the important property called coherence. The more general and simpler K-method, not dependent on the BN, is usually (but not necessarily) derived by data mining, and is therefore also introduced. As discussed, BN developments may converge to an HDN-like concept, so it is reasonable to consider the HDN as a BN extension. | [
"hyperbolic",
"hyperbolic dirac net",
"dirac",
"bayes net",
"medical inference",
"decision support system",
"expert system",
"complex"
] | [
"P",
"P",
"P",
"P",
"P",
"M",
"U",
"U"
] |
-eXmqyH | on the relationship between novelty and popularity of user-generated content | This work deals with the task of predicting the popularity of user-generated content. We demonstrate how the novelty of newly published content plays an important role in affecting its popularity. We study three dimensions of novelty: contemporaneous novelty, self novelty, and discussion novelty. We demonstrate the contribution of the new novelty measures to estimating blog-post popularity by predicting the number of comments expected for a fresh post. We further demonstrate how novelty-based measures can be utilized for predicting the citation volume of academic papers. | [
"novelty",
"popularity",
"user-generated content"
] | [
"P",
"P",
"P"
] |
2baYbpH | Capturing and rendering geometry details for BTF-mapped surfaces | Bidirectional texture functions, or BTFs, accurately model reflectance variation at a fine (meso-) scale as a function of lighting and viewing direction. BTFs also capture view-dependent visibility variation, also called masking or parallax, but only within surface contours. Mesostructure detail is neglected at silhouettes, so BTF-mapped objects retain the coarse shape of the underlying model. We augment BTF rendering to obtain approximate mesoscale silhouettes. Our new representation, the 4D mesostructure distance function (MDF), tabulates the displacement from a reference frame where a ray first intersects the mesoscale geometry beneath as a function of ray direction and ray position along that reference plane. Given an MDF, the mesostructure silhouette can be rendered with a per-pixel depth peeling process on graphics hardware, while shading and local parallax are handled by the BTF. Our approach allows real-time rendering, handles complex, non-height-field mesostructure, requires that no additional geometry be sent to the rasterizer other than the mesh triangles, is more compact than textured visibility representations used previously, and, for the first time, can be easily measured from physical samples. We also adapt the algorithm to capture detailed shadows cast both by and onto BTF-mapped surfaces. We demonstrate the efficiency of our algorithm on a variety of BTF data, including real data acquired using our BTF-MDF measurement system. | [
"rendering",
"bidirectional texture functions",
"reflectance and shading models",
"shadow algorithms",
"texture mapping"
] | [
"P",
"P",
"R",
"R",
"M"
] |
4SdsQeu | Towards understanding longitudinal collaboration networks: a case of mammography performance research | In this paper, we explore the longitudinal research collaboration network of mammography performance over 30 years by creating and analysing a large collaboration network dataset using Scopus. The study of social networks using longitudinal data may provide new insights into how this collaborative research evolves over time as well as what type of actors influence the whole network in time. The methods and findings presented in this work aim to assist in identifying key actors in other research collaboration networks. In doing so, we apply a rank aggregation technique to centrality measures in order to derive a single ranking of influential actors. We argue that there is a strong correlation between the level of degree and closeness centralities of an actor and its influence in the research collaboration network (at macro/country level). | [
"mammography performance",
"research collaboration network",
"longitudinal data",
"influential actors",
"social network analysis"
] | [
"P",
"P",
"P",
"P",
"M"
] |
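The rank-aggregation step can be illustrated with a Borda count over two centrality rankings. The toy graph and the choice of Borda specifically are our assumptions; the record above does not fix a particular aggregation rule.

```python
from collections import defaultdict

# Hypothetical co-authorship graph as an adjacency dict.
G = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c'}, 'c': {'a', 'b'},
     'd': {'a', 'e'}, 'e': {'d'}}

def degree(G):
    return {v: len(nb) for v, nb in G.items()}

def closeness(G):
    scores = {}
    for s in G:                              # BFS from each node
        dist, frontier, seen, total = 0, {s}, {s}, 0
        while frontier:
            dist += 1
            frontier = {w for v in frontier for w in G[v]} - seen
            seen |= frontier
            total += dist * len(frontier)
        scores[s] = (len(G) - 1) / total
    return scores

def borda(*rankings):
    """Aggregate several centrality scores into a single ranking (Borda count)."""
    pts = defaultdict(int)
    for scores in rankings:
        for rank, v in enumerate(sorted(scores, key=scores.get, reverse=True)):
            pts[v] += len(scores) - rank
    return sorted(pts, key=pts.get, reverse=True)

print(borda(degree(G), closeness(G)))        # 'a' comes out as most influential
```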
2qA3fTU | GENETICALLY IMPROVED PRESEQUENCES FOR EUCLIDEAN TRAVELING SALESMAN PROBLEMS | The spacefilling curve (SFC) method of Bartholdi and Platzman is an extremely fast heuristic for the Euclidean Traveling Salesman Problem. The authors show how genetic search over a parametrized family of spacefilling curves can be used to improve the quality of the tours generated by SFC. The computational effort required grows slowly as a function of problem size, and the tours obtained define robust presequences for repetitive problems in which only a subset of all cities will be present in any given problem instance. | [
"presequence",
"euclidean traveling salesman problem",
"spacefilling curve",
"genetic algorithm",
"problem space search"
] | [
"P",
"P",
"P",
"M",
"M"
] |
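The SFC heuristic itself fits in a few lines: map each city to its position along a spacefilling curve and visit the cities in that order. Below we use a Hilbert curve (the classic xy2d conversion); the genetic search over a parametrized curve family, e.g. mutating the rotation/reflection choices at each recursion level, is not shown.

```python
import random

def hilbert_index(n, x, y):
    """Position of grid cell (x, y) along the Hilbert curve of an n x n grid
    (n a power of two); the classic xy2d conversion."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                          # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

random.seed(3)
cities = [(random.random(), random.random()) for _ in range(100)]
N = 1024                                     # curve resolution
tour = sorted(range(len(cities)),
              key=lambda i: hilbert_index(N, int(cities[i][0] * (N - 1)),
                                          int(cities[i][1] * (N - 1))))
print(tour[:10])                             # first ten cities of the SFC tour
```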
Q8A-Kmv | THE EXISTENCE OF LIMIT CYCLE FOR PERTURBED BILINEAR SYSTEMS | In this paper, the feedback control for a class of bilinear control systems with a small parameter is proposed to guarantee the existence of a limit cycle. We use the perturbation method of seeking an approximate solution as a finite Taylor expansion of the exact solution. This perturbation method exploits the "smallness" of the perturbation parameter epsilon to construct an approximate periodic solution. Furthermore, some simulation results are given to illustrate the existence of a limit cycle for this class of nonlinear control systems. | [
"limit cycle",
"perturbed bilinear system",
"feedback control"
] | [
"P",
"P",
"P"
] |
2mDHfB7 | Anesthesia with propofol slows atrial fibrillation dominant frequencies | The mechanisms responsible for the maintenance of atrial fibrillation (AF) are not yet completely understood. It has been demonstrated that AF can be modulated by several cardiac diseases, the autonomic nervous system and even drugs with purportedly no antiarrhythmic properties. We evaluated the effects of a widely used anaesthetic agent (propofol) on the fibrillation patterns. Spectral analysis was performed over atrial electrograms at baseline and immediately after a propofol bolus. Only after performing principal component analysis (PCA) were we able to detect, with statistical significance, that propofol slows AF. | [
"atrial fibrillation",
"anaesthetic",
"principal component analysis (pca)"
] | [
"P",
"P",
"P"
] |
45UTkBh | Forest species recognition using macroscopic images | The recognition of forest species is a very challenging task that generally requires well-trained human specialists. However, because such training takes considerable time, few specialists reach good classification accuracy and there are not enough of them to meet industry demand. Computer vision systems are a very interesting alternative for this case. The construction of a reliable classification system is not a trivial task, though. In the case of forest species, one must deal with the great intra-class variability and also the lack of a publicly available database for training and testing the classifiers. To cope with such variability, in this work, we propose a two-level divide-and-conquer classification strategy where the image is first divided into several sub-images which are classified independently. In the lower level, all the decisions of the different classifiers, trained with different features, are combined through a fusion rule to generate a decision for the sub-image. The higher-level fusion combines all these partial decisions for the sub-images to produce a final decision. Besides the classification system, we also extended our previous database, which is now composed of 41 species of Brazilian flora. It is available upon request for research purposes. A series of experiments shows that the proposed strategy achieves compelling results. Compared to the best single classifier, which is an SVM trained with a texture-based feature set, the divide-and-conquer strategy improves the recognition rate by about 9 percentage points, while the mean improvement observed with SVMs trained on different descriptors was about 19 percentage points. The best recognition rate achieved in this work was 97.77%. | [
"textural descriptors",
"fusion of classifiers",
"two-level classification strategy",
"forest species classification"
] | [
"M",
"R",
"R",
"R"
] |
-DpwRKc | A novel design strategy for iterative learning and repetitive controllers of systems with a high modal density: Application to active noise control | This paper describes the application of a novel design strategy for iterative learning and repetitive controllers for systems with a high modal density, presented in the companion paper, on two experimental case studies. Both case studies are examples of active structural acoustic control, where the goal is to reduce the radiated noise using structural actuators. In the first case study, ILC is used to control punching noise. An electrodynamic actuator on the frame of the punching machine is driven by the ILC algorithm which takes advantage of the repetitiveness of the consecutive impacts to reduce noise radiation. In the second case study, an RC algorithm is used to control the noise radiated by rotating machinery, which is often mainly periodic. A piezoelectric actuator incorporated in the bearing is driven by the RC algorithm which is capable of reducing harmonics of the rotational frequency of the shaft. Both applications show the practical usefulness of the novel design strategy. | [
"repetitive control",
"iterative learning control",
"high number of degrees of freedom"
] | [
"P",
"R",
"M"
] |
1&fGtP4 | A feature-based prototype system for the evaluation and optimisation of manufacturing processes | The aim of this research work is to develop an intelligent design environment that enables designers to incorporate all product- and process-related activities into the design phase at early stages of the design process. One of the most important aspects of these activities is evaluation and optimisation of manufacturing processes. This research article focuses on developing a prototype system for manufacturing process optimisation using a combination of mathematical methods and constraint-programming techniques. This approach enables designers to evaluate and optimise feasible manufacturing processes as early as possible during the design session. This helps to avoid unexpected design iterations that waste a great amount of engineering time and effort and hence lengthen lead time. The development process has passed through the following stages: firstly, the development of an intelligent design system for manufacturing process optimisation; secondly, representation of product features, processes, cost, time and constraints; thirdly, developing the process optimisation rules for the selection of feasible processes for form features; and finally, a user interface that provides the designer with feedback about process selection and evaluation. | [
"process optimisation",
"concurrent engineering",
"cost estimation",
"knowledge-based systems",
"feature-based design"
] | [
"P",
"M",
"M",
"M",
"R"
] |
wnKRBYj | Investigation of naphthalene bisimide derivatives/gold interfaces: The influence of alkylthienyl groups in N-substituents on the energy levels | The type of N-substituents affects the energetics at the interface with Au. The organic semiconductors seem to form uniform films on the substrate. Alkylthienyl N-substituted naphthalene bisimides show smaller injection barriers. | [
"naphthalene bisimide derivatives",
"interfaces",
"au",
"injection barriers",
"xps",
"ups",
"alkythienyl"
] | [
"P",
"P",
"P",
"P",
"U",
"U",
"U"
] |
4Dr1Zsu | Comparing computer-supported dynamic modeling and 'paper & pencil' concept mapping technique in students' collaborative activity | This study aims at highlighting the collaborative activity of two high school students (age 14) in the cases of modeling the complex biological process of plant growth with two different tools: the 'paper & pencil' concept mapping technique and the computer-supported educational environment 'Models Creator'. Students' shared activity in both cases is carried out in the presence of a facilitator providing technical as well as cognitive support when necessary. The objective of the study is to highlight the ways in which the collaborating students are engaged in the plant growth modeling activity in the two cases and also identify the activity's similar and different aspects in each one. Our analysis is carried out on two complementary axes, the first of which concerns the process of collaboratively creating a plant growth model with each different tool, while the second has to do with the students' conceptualizations of the biological aspect of the modeling task in each case. A two-level analytic tool for the modeling process has been derived within the theoretical framework of 'activity theory' on the basis of the OCAF scheme for basic modeling operations and the scheme of Stratford et al. [Stratford, S. J., Krajcik, J., & Soloway, E. (1998). Secondary students' dynamic modeling processes: analyzing, reasoning about, synthesizing, and testing models of stream ecosystems. Journal of Science Education and Technology, 7(3), 215-234.] for higher-order modeling actions. According to our results, four major modeling actions (analysis, synthesis, testing-interpreting, technical and cognitive support) performed through a plethora of modeling operations define the steps of the modeling process in both cases, while specific qualitative differences can actually be identified. Finally, the students' conceptualizations of the biological aspect of the modeling task in the two cases are analyzed with regard to their capability of shifting reasoning between macro- and micro-levels, and educational implications are also discussed. | [
"applications in subject areas",
"cooperative/collaborative learning",
"interactive learning environments",
"secondary education",
"teaching/learning strategies"
] | [
"M",
"M",
"M",
"R",
"U"
] |
--wU8fU | Fleet assignment and routing with schedule synchronization constraints | This paper introduces a new type of constraint, related to schedule synchronization, in the problem formulation of aircraft fleet assignment and routing problems, and it proposes an optimal solution approach. This approach is based on Dantzig-Wolfe decomposition/column generation. The resulting master problem consists of flight covering constraints, as in usual applications, and of schedule synchronization constraints. The corresponding subproblem is a shortest path problem with time windows and linear costs on the time variables and it is solved by an optimal dynamic programming algorithm. This column generation procedure is embedded into a branch and bound scheme to obtain integer solutions. A dedicated branching scheme was devised in this paper where the branching decisions are imposed on the time variables. Computational experiments were conducted using weekly fleet routing and scheduling problem data coming from a European airline. The test problems are solved to optimality. A detailed result analysis highlights the advantages of this approach: an extremely short subproblem solution time and, after several improvements, a very efficient master problem solution time. | [
"routing",
"scheduling",
"dantzigwolfe decomposition",
"time windows",
"dynamic programming",
"branch and bound",
"weekly aircraft fleet assignment",
"air transportation"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"R",
"U"
] |
36KcDht | Wavelength assignment for multicast in all-optical WDM networks with splitting constraints | Multicast is an important application in all-optical WDM networks. The wavelength assignment problem for WDM multicast is to assign a set of wavelengths to the links of a given multicast tree. In an all-optical WDM network without wavelength conversions, wavelength assignment is the key to guarantee the quality of service and to reduce communication costs. In this paper, we study wavelength assignment for WDM multicast with two criteria, to cover the maximum number of destinations, and to minimize the wavelength costs. The computational complexity of the problem is studied. Three heuristic algorithms are proposed and the worst-case approximation ratios for some heuristic algorithms are given. We also derive a lower bound of the minimum total wavelength cost and an upper bound of the maximum number of reached destinations. The efficiency of the proposed heuristic algorithms and the effectiveness of the derived bounds are verified by the simulation results. | [
"wavelength assignment",
"wdm multicast",
"heuristics",
"np-complete"
] | [
"P",
"P",
"P",
"U"
] |
3ALAAkb | Cancer gene search with data-mining and genetic algorithms | Cancer leads to approximately 25% of all mortalities, making it the second leading cause of death in the United States. Early and accurate detection of cancer is critical to the well being of patients. Analysis of gene expression data leads to cancer identification and classification, which will facilitate proper treatment selection and drug development. Gene expression data sets for ovarian, prostate, and lung cancer were analyzed in this research. An integrated gene-search algorithm for genetic expression data analysis was proposed. This integrated algorithm involves a genetic algorithm and correlation-based heuristics for data preprocessing (on partitioned data sets) and data mining (decision tree and support vector machines algorithms) for making predictions. Knowledge derived by the proposed algorithm has high classification accuracy with the ability to identify the most significant genes. Bagging and stacking algorithms were applied to further enhance the classification accuracy. The results were compared with that reported in the literature. Mapping of genotype information to the phenotype parameters will ultimately reduce the cost and complexity of cancer detection and classification. | [
"genetic algorithm",
"lung cancer",
"genetic expression",
"integrated algorithm",
"data mining",
"gene selection",
"ovarian cancer",
"prostate cancer"
] | [
"P",
"P",
"P",
"P",
"P",
"R",
"R",
"R"
] |
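A compact sketch of the GA-plus-correlation-heuristic idea from the record above, run on synthetic data (the fitness function, rates and sizes below are our guesses, not the paper's settings):

```python
import numpy as np
rng = np.random.default_rng(7)

X = rng.normal(size=(60, 40))                       # 60 samples, 40 "genes"
y = (X[:, 3] + X[:, 17] > 0).astype(float)          # two informative genes

def merit(mask):
    """Correlation-based fitness: relevance to the class minus redundancy."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return -1.0
    rel = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in idx])
    red = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                   for a in idx for b in idx if a < b]) if idx.size > 1 else 0.0
    return rel - 0.5 * red - 0.01 * idx.size        # small parsimony penalty

pop = (rng.random((30, 40)) < 0.1).astype(int)      # sparse random gene masks
for _ in range(40):                                 # GA generations
    fit = np.array([merit(m) for m in pop])
    parents = pop[np.argsort(fit)[-10:]]            # truncation selection
    kids = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 40)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(40) < 0.02                # mutation
        kids.append(np.where(flip, 1 - child, child))
    pop = np.array(kids)

best = pop[np.argmax([merit(m) for m in pop])]
print(np.flatnonzero(best))                         # tends to include 3 and 17
```

The selected mask would then feed a downstream classifier (decision tree or SVM in the paper); that stage is omitted here.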
heb6ddQ | Genetic algorithm for controllers in elevator groups: analysis and simulation during lunchpeak traffic | A genetic algorithm (GAHCA) is proposed to control elevator groups in professional buildings. The genetic algorithm is compared with the universal controller algorithm used in industry applications. To this end, an ARENA simulation scenario of heavy lunchpeak traffic conditions has been generated. The results allow us to affirm that our genetic algorithm achieves better performance, in terms of system waiting times, than traditional duplex algorithms. | [
"genetic algorithm",
"controller",
"elevator",
"simulation",
"lunchpeak",
"vertical traffic"
] | [
"P",
"P",
"P",
"P",
"P",
"M"
] |
H-TRfHG | An ultrasonic image evaluation system for assessing the severity of chronic liver disease | A quantitative ultrasonic image evaluation system that generates a numerical severity measurement to assess the progression of chronic liver disease and assist clinical diagnosis is proposed in this paper. The progression of chronic liver disease is closely related to the amount of fibrosis of the liver parenchyma under microscopic examination. The powerful index, the computer morphometry (CM) score developed in Sun et al. [Sun YN, Horng MH, Lin XZ. Automatic computer morphometry system techniques and applications in medical diagnosis. In: Cornelius TL, editor. Computational methods in biophysics, biomaterials, biotechnology and medical systems. Algorithm development, mathematical analysis and diagnostics, vol. 4. Boston/Dordrecht/London: Kluwer Academic Publishers; 2003. p. 3350], accurately measures the fibrosis ratio of liver parenchyma from microscopic images of human liver specimens. Therefore, patients' CM scores serve as an assessment basis for developing the disease measurement of the B-mode liver sonogram under echo-texture feature analysis methods. The radial basis function (RBF) network is used to establish the correlation between texture features of the ultrasonic liver image and the corresponding CM score. The output of the RBF network is called the ultrasonic disease severity (UDS) score. The correct classification rate of 120 test images by using the UDS score is 92.5%. These promising results reveal that the UDS is capable of providing an important reference to diagnose chronic liver disease. | [
"ultrasonic liver image",
"ultrasonic scoring system",
"ultrasonic disease severity score",
"computer morphometry score",
"radial basis function network"
] | [
"P",
"R",
"R",
"R",
"R"
] |
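The RBF-network mapping from texture features to a severity score can be sketched as follows; the features and target below are synthetic stand-ins, and the centers and widths are chosen naively rather than by the paper's procedure.

```python
import numpy as np
rng = np.random.default_rng(0)

def fit_rbf(X, y, n_centers=10, gamma=2.0, ridge=1e-6):
    """RBF network: random centers, Gaussian activations, linear output layer."""
    C = X[rng.choice(len(X), n_centers, replace=False)]
    Phi = np.exp(-gamma * ((X[:, None, :] - C[None]) ** 2).sum(-1))
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)
    return C, w

def predict_rbf(X, C, w, gamma=2.0):
    Phi = np.exp(-gamma * ((X[:, None, :] - C[None]) ** 2).sum(-1))
    return Phi @ w

# Hypothetical stand-in for "texture features -> CM score".
X = rng.random((100, 4))
y = np.sin(3 * X[:, 0]) + X[:, 1]          # synthetic severity target
C, w = fit_rbf(X, y)
print(np.corrcoef(predict_rbf(X, C, w), y)[0, 1])   # close to 1 on training data
```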
J8P-B-Z | Several Classes of Even-Variable Balanced Boolean Functions with Optimal Algebraic Immunity | In this paper, we construct six infinite classes of balanced Boolean functions. These six classes of Boolean functions achieve optimal algebraic degree, optimal algebraic immunity and high nonlinearity. Furthermore, we prove a lower bound on the nonlinearities of these balanced Boolean functions and prove an improved lower bound on the nonlinearity of the Carlet-Feng Boolean function. | [
"boolean function",
"optimal algebraic immunity",
"cryptography",
"non linearity"
] | [
"P",
"P",
"U",
"U"
] |
46:ZNW4 | A novel look-ahead optimization strategy for trie-based approximate string matching | This paper deals with the problem of estimating a transmitted string X* by processing the corresponding string Y, which is a noisy version of X*. We assume that Y contains substitution, insertion, and deletion errors, and that X* is an element of a finite (but possibly large) dictionary, H. The best estimate X+ of X* is defined as that element of H which minimizes the generalized Levenshtein distance D(X, Y) between X and Y such that the total number of errors is not more than K, for all X in H. The trie is a data structure that offers search costs that are independent of the document size. Tries also combine prefixes together, and so by using tries in approximate string matching we can utilize the information obtained in the process of evaluating any one D(X_i, Y), to compute any other D(X_j, Y), where X_i and X_j share a common prefix. In the artificial intelligence (AI) domain, branch and bound (BB) schemes are used when we want to prune paths that have costs above a certain threshold. These techniques have been applied to prune, for example, game trees. In this paper, we present a new BB pruning strategy that can be applied to dictionary-based approximate string matching when the dictionary is stored as a trie. The new strategy attempts to look ahead at each node, c, before moving further, by merely evaluating a certain local criterion at c. The search algorithm according to this pruning strategy will not traverse inside the subtrie(c) unless there is a hope of determining a suitable string in it. In other words, as opposed to the reported trie-based methods (Kashyap and Oommen in Inf Sci 23(2):123-142, 1981; Shang and Merrett in IEEE Trans Knowledge Data Eng 8(4):540-547, 1996), the pruning is done a priori before even embarking on the edit distance computations. The new strategy depends highly on the variance of the lengths of the strings in H. It combines the advantages of partitioning the dictionary according to the string lengths, and the advantages gleaned by representing H using the trie data structure. The results demonstrate a marked improvement (up to 30% when costs are of a 0/1 form, and up to 47% when costs are general) with respect to the number of operations needed on three benchmark dictionaries. | [
"approximate string matching",
"pruning",
"trie-based syntactic pattern recognition",
"noisy syntactic recognition using tries",
"branch and bound techniques"
] | [
"P",
"P",
"M",
"M",
"R"
] |
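A sketch of trie-based approximate matching with an a-priori cut: the dynamic-programming row of a node extends its parent's row, so shared prefixes are evaluated once, and a subtrie is skipped as soon as no extension can stay within K errors. The paper's look-ahead criterion additionally exploits string-length statistics; only the simpler min-row cut is shown here.

```python
def build_trie(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node['$'] = w                      # end-of-word marker
    return root

def search(trie, query, K):
    """All dictionary words within Levenshtein distance K of `query`."""
    hits = []
    first = list(range(len(query) + 1))    # DP row for the empty prefix
    def dfs(node, row):
        if '$' in node and row[-1] <= K:
            hits.append((node['$'], row[-1]))
        for ch, child in node.items():
            if ch == '$':
                continue
            new = [row[0] + 1]             # extend parent's row by one character
            for j in range(1, len(query) + 1):
                cost = 0 if query[j - 1] == ch else 1
                new.append(min(new[j - 1] + 1, row[j] + 1, row[j - 1] + cost))
            if min(new) <= K:              # prune hopeless subtries early
                dfs(child, new)
    dfs(trie, first)
    return hits

trie = build_trie(['cat', 'cart', 'card', 'dog', 'care'])
print(search(trie, 'cars', 1))   # [('cart', 1), ('card', 1), ('care', 1)]
```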
-8UMy7P | Fingerprint classification based on extraction and analysis of singularities and pseudo ridges | In this paper, we introduce a new approach to fingerprint classification based on extraction and analysis of both singularities and traced pseudo ridges relating to singular points. Because of the image quality, it is difficult to get the correct number and position of the singularities that are widely used in current structural classification methods. With the help of pseudo ridge tracing and analysis of the traced curves, our method does not rely on the extraction of the exact number and positions of the true singular point(s), thus improving the classification accuracy. This method has been tested on the NIST special fingerprint database 4. For the 4000 images in this database, the classification accuracy reaches 92.7% for the 4-class problem. | [
"fingerprint classification",
"pseudo ridge",
"biometrics",
"singular point detection"
] | [
"P",
"P",
"U",
"M"
] |
NuhShuH | h-graphs: A new representation for tree decompositions of graphs | h-graphs, a new representation for tree decompositions of constraint graphs, is presented. h-graphs explicitly capture construction steps dependencies in a tree decomposition. An application to speed up computing feasibility ranges for constraint parameters is described. | [
"constraint graphs",
"construction steps dependencies",
"parametric solid modeling",
"geometric constraint solving",
"tree-decompositions",
"parameter ranges"
] | [
"P",
"P",
"U",
"M",
"U",
"R"
] |
4aBFtGr | Effect Mechanisms of Perceptual Congruence Between Information Systems Professionals and Users on Satisfaction with Service | With the proliferation of available electronic service channels for information systems (IS) users such as mobile or intranet services in companies, service interactions between IS users and IS professionals have become an increasingly important factor for organizational business-IT alignment. Despite the increasing relevance of such interactions, the implications of agreement or disagreement on the fulfillment of critical service quality factors for successful alignment and higher user satisfaction are far from being well understood. While prior research has extensively studied the question of matching different viewpoints on IS service quality in organizations, little or no attention has been paid to the role of perceptual congruence or incongruence in the dyadic relationship between IS professionals and users in forming user satisfaction with the IS function. Drawing on cognitive dissonance theory, prospect theory, and perceptual congruence research, this study examines survey responses from 169 matching pairs of IS professionals and users in different organizations and explains how perceptual fit patterns affect user satisfaction with the IS function. The paper demonstrates that perceptual congruence can, in and of itself, have an impact on user satisfaction, which goes beyond what was found before. Moreover, the results of the study reveal the relevance of nonlinear and asymmetric effect mechanisms arising from perceptual (in)congruence that may affect user satisfaction. This study extends our theoretical understanding of the role of perceptual alignment or misalignment on IS service quality factors in forming user satisfaction and lays the foundation for further study of the interplay between perceptions in the dyadic relationship between IS professionals and IS users. Managers who seek to encourage particular behaviors by the IS professionals or IS users may use the results of this study to reconcile the often troubled business-IT relationship. | [
"perceptual congruence",
"alignment",
"is service quality",
"polynomial modeling",
"response surface analysis",
"servqual"
] | [
"P",
"P",
"P",
"U",
"M",
"U"
] |
4jE1gJL | Application of runaway reaction mechanism generation to predict and control reactive hazards | Many industrial incidents are caused by thermal runaway reactions. Therefore, a good understanding of runaway reactions is necessary to predict and control reactive hazards. A detailed kinetic modeling approach is proposed to simulate runaway reactions under industrial conditions. This paper addresses the first step of this approach: mechanism generation. Computational chemistry was employed to estimate thermodynamic properties of reactants, intermediates, and products, and the Evans-Polanyi linear free energy relationship was used to estimate activation barriers of elementary reactions. To illustrate this mechanism generation approach, hydroxylamine is used as an example. The distribution of the predicted final products agrees with experimental results. | [
"runaway reaction",
"mechanism generation",
"evans-polanyi",
"hydroxylamine",
"quantum chemistry"
] | [
"P",
"P",
"P",
"P",
"M"
] |
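The Evans-Polanyi step amounts to a linear estimate Ea = E0 + alpha * dH within a reaction family. A toy sketch (E0 and alpha are invented fitting constants, not values from the paper):

```python
E0, ALPHA = 45.0, 0.6        # kJ/mol and dimensionless; hypothetical fit

def activation_energy(delta_h):
    """Evans-Polanyi estimate of an elementary-step barrier from its enthalpy."""
    return max(0.0, E0 + ALPHA * delta_h)   # a barrier cannot be negative

for dh in (-80.0, 0.0, 40.0):               # exothermic .. endothermic steps
    print(f"dH = {dh:6.1f} kJ/mol  ->  Ea ~ {activation_energy(dh):5.1f} kJ/mol")
```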
-PgRXP7 | Modeling WDM Wavelength Switching Systems for Use in GMPLS and Automated Path Computation | Network control planes have made an implicit assumption that the switching devices in a network are symmetric. In wavelength-switched optical networks even the most basic switching element, the reconfigurable add-drop multiplexer, is highly asymmetric. This paper presents a model of optical switching subsystems for use in generalized multi-protocol label switching (GMPLS), route selection, and wavelength assignment. The model covers a large class of switching subsystems without internal wavelength converters. The model is applied to a number of common optical technologies, and a compact encoding for use in the optical control plane is furnished along with a method for deriving a simplified graph representation. | [
"networks, assignment and routing algorithms",
"networks, wavelength assignment",
"networks, wavelength routing"
] | [
"M",
"R",
"R"
] |
v&sexRW | A methodology for designing information security feedback based on User Interface Patterns | A methodology is provided here to assist in the design of secure interactive applications. In particular, this methodology helps design adequate security information feedback based on User Interface Patterns; the resulting feedback is then evaluated against a set of design/evaluation criteria called Human-Computer Interaction for Security (HCI-S). In case of a security issue, the security information feedback is generally presented using the visual and auditory channels required to achieve effective notifications, and it is explicitly specified in the design of user interfaces for secure web systems. | [
"user interface patterns",
"security information feedback",
"design patterns",
"heuristic evaluation",
"trust",
"usability",
"user-centered design"
] | [
"P",
"P",
"R",
"M",
"U",
"U",
"M"
] |
2hfSEsG | Estimation of stability and accuracy of inverse problem solution for the vocal tract | The inverse problem for the vocal tract is under consideration from the viewpoint of the ill-posed problem theory. The proposed approach, which permits overcoming the difficulties related to ambiguity and instability, is based on the variational regularization with constraints. The work of articulators is used as a functional of regularization and a criterion of optimality for finding an approximate solution. The measured acoustical parameters of the speech signal serve as external constraints while the geometry of the vocal tract, the mechanics of the articulation, and the phonetic properties of the language play the role of internal constraints. An effective numerical implementation of the proposed approach is based on a local piecewise linear approximation of the articulatory-to-acoustics mapping and a polynomial approximation of the discrepancy measure. A heuristic method named the calibrating curves method is applied for estimating the accuracy of the obtained approximate solution. It was shown that in some cases the error of the inverse problem solution is weakly dependent on the errors of formant frequency measurements. The vocal tract shapes obtained by virtue of the proposed approach are very close to those measured in X-ray experiments. | [
"inverse problem",
"speech",
"calibrating curves",
"solution accuracy"
] | [
"P",
"P",
"P",
"R"
] |
2gn7DP8 | Towards a systemic formalisation of interoperability | Interoperability has been mainly approached from an IT point of view or enterprise collaboration perspective. This paper aims to contribute to developing a science base for interoperability by studying interoperability on the basis of system theory. The main contribution is to propose a formalisation of interoperability grounded in the general system theory: the Ontology of Interoperability (OoI). OoI provides a meta-model for ontological descriptions of systems, problems and solutions, which can then be inferred for a computer-aided interoperability diagnosis and problem solving. Definitions of related concepts as well as a systemic model and a decisional model are given and discussed. Based on the Framework for Enterprise Interoperability (CEN/ISO 11354), the specialisation of the OoI to the enterprise domain is discussed. A case example is presented to illustrate the proposed approach. | [
"interoperability",
"general system theory",
"ontology",
"conceptual modelling",
"problem-solving"
] | [
"P",
"P",
"P",
"M",
"U"
] |
4L-GS4n | Application and comparison of computational intelligence techniques for optimal location and parameter setting of UPFC | The unified power flow controller (UPFC) is one of the most effective flexible AC transmission systems (FACTS) devices for enhancing power system security. However, the extent to which the UPFC's performance can be brought out depends highly upon the location and parameter setting of this device in the system. This paper presents a new approach based on computational intelligence (CI) techniques to find the optimal placement and parameter setting of UPFC for enhancing power system security under single line contingencies (N-1 contingency). Firstly, a contingency analysis and ranking process to determine the most severe line outage contingencies, considering line overloads and bus voltage limit violations as a performance index, is performed. Secondly, a relatively new evolutionary optimization technique, namely the differential evolution (DE) technique, is applied to find the optimal location and parameter setting of UPFC under the determined contingency scenarios. To verify our proposed approach and for comparison purposes, simulations are performed on IEEE 14-bus and IEEE 30-bus power systems. The results we have obtained indicate that DE is an easy-to-use, fast, robust and powerful optimization technique compared with genetic algorithm (GA) and particle swarm optimization (PSO). Installing UPFC in the optimal location determined by DE can significantly enhance the security of the power system by eliminating or minimizing the number of overloaded lines and the bus voltage limit violations. | [
"unified power flow controller (upfc)",
"power flow",
"contingency analysis",
"differential evolution (de)",
"genetic algorithm (ga)",
"particle swarm optimization (pso)"
] | [
"P",
"P",
"P",
"P",
"P",
"P"
] |
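For reference, the classic DE/rand/1/bin scheme underlying the comparison can be sketched in a few lines; here it minimizes a toy sphere function rather than the UPFC placement objective.

```python
import numpy as np
rng = np.random.default_rng(42)

def de(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Minimize f over box bounds with DE/rand/1/bin."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            fc = f(trial)
            if fc <= cost[i]:                           # greedy selection
                pop[i], cost[i] = trial, fc
    return pop[np.argmin(cost)], cost.min()

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = de(sphere, [(-5, 5)] * 4)
print(x_best, f_best)   # near the origin
```

Swapping `sphere` for a power-flow-based security index, with bounds covering candidate locations and UPFC settings, would reproduce the overall optimization loop described above.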
4XTfuDS | FCLOS: A client-server architecture for mobile OLAP | Mobile online analytical processing (mOLAP) encompasses all necessary technologies for information systems that enable OLAP data access to users carrying a mobile device. This paper presents FCLOS, a complete client-server architecture explicitly designed for mOLAP. FCLOS is founded on intelligent scheduling and compressed transmissions in order to become a query-efficient, self-adaptive and scalable mOLAP information system. Scheduling exploits derivability between data cubes in order to group related queries and eventually reduce the necessary transmissions (broadcasts). Compression is achieved by the m-Dwarf, a novel compressed data cube physical structure, which has no loss of semantic information and is explicitly designed for mobile applications. The superiority of FCLOS over state-of-the-art systems is shown both experimentally and analytically. | [
"molap",
"broadcast",
"subsumption"
] | [
"P",
"P",
"U"
] |
2msfrc8 | Fuzzy target tracking control of autonomous mobile robots by using infrared sensors | The theme of this paper is to design a real-time fuzzy target tracking control scheme for autonomous mobile robots by using infrared sensors. First, two mobile robots are set up in the target tracking problem, where one is the target mobile robot with infrared transmitters and the other one is the tracker mobile robot with infrared receivers and reflective sensors. The former is designed to drive in a specific trajectory. The latter is designed to track the target mobile robot. Then we address the design of the fuzzy target tracking control unit, which consists of a behavior network and a gate network. The behavior network possesses the fuzzy wall following control (FWFC) mode, fuzzy target tracking control (FTTC) mode, and two fixed control modes to deal with different situations in real applications. Both the FWFC and FTTC are realized by the fuzzy sliding-mode control scheme. A gate network is used to address the fusion of measurements of two infrared sensors and is developed to recognize which situation applies and which action should be executed. Moreover, the target tracking control with obstacle avoidance is also investigated in this paper. Both computer simulations and real-time implementation experiments of autonomous target tracking control demonstrate the effectiveness and feasibility of the proposed control schemes. | [
"fuzzy target tracking control",
"autonomous mobile robot",
"behavior network",
"gate network",
"fuzzy sliding-mode control"
] | [
"P",
"P",
"P",
"P",
"P"
] |
1Q81CrL | object identification in compressed view-dependent multiresolution meshes | We present a method that makes it possible to identify individual objects in a view-dependent multiresolution triangle mesh. Unlike previous methods, where the input mesh was considered a uniform "triangle soup", our method enables passing of object semantics through the process of encoding, transmission, and decoding. This information can be used on the client side to query additional data for a specific part of the mesh. Moreover, it allows a part of the mesh to be transformed (e.g., moved to a different location for better inspection) within the multiresolution framework of adaptive refinement. Finally we show that the overhead introduced by our method is negligible. The algorithm has been tested with artificially and manually created data and with models acquired within the digital archaeology project "3D MURALE". | [
"compression",
"multiresolution",
"modeling",
"meta-data"
] | [
"P",
"P",
"P",
"U"
] |
3TM9P1e | Learning with personalized recommender systems: A psychological view | This paper explores the potentials of recommender systems for learning from a psychological point of view. It is argued that main features of recommender systems (collective responsibility, collective intelligence, user control, guidance, personalization) fit very well to principles in the learning sciences. However, recommender systems should not be transferred from commercial to educational contexts on a one-to-one basis, but rather need adaptations in order to facilitate learning. Potential adaptations are discussed both with regard to learners as recipients of information and learners as producers of data. Moreover, it is distinguished between system-centered adaptations that enable proper functioning in educational contexts, and social adaptations that address typical information processing biases. Implications for the design of educational recommender systems and for research on educational recommender systems are discussed. | [
"learning",
"recommender systems"
] | [
"P",
"P"
] |
47mfCb6 | Using Multivariate Adaptive Regression Splines in the Construction of Simulated Soccer Team's Behavior Models | In soccer, like in other collective sports, although players try to hide their strategy, it is always possible, with a careful analysis, to detect it and to construct a model that characterizes their behavior throughout the game phases. These findings are extremely relevant for a soccer coach, not only to evaluate the performance of his athletes, but also for the construction of the opponent team model for the next match. During a soccer match, due to the presence of a complex set of intercorrelated variables, the detection of a small set of factors that directly influence the final result becomes an almost impossible task for a human being. As a consequence, a huge number of analysis software packages capable of calculating a vast set of game statistics have appeared over the years. However, all of them need a soccer expert in order to interpret the produced data and select which are the most relevant variables. Taking as a basis a set of statistics extracted from the RoboCup 2D Simulation League log files and using a multivariable analysis, the aim of this research project is to identify which variables most influence the final game result and to create prediction models capable of automatically detecting soccer team behaviors. For those purposes, more than two hundred games (from the 2006-2009 competition years) were analyzed according to a set of variables defined by a board of soccer experts, and using the MARS and RReliefF algorithms. The obtained results show that the MARS algorithm presents a lower error value when compared to RReliefF (from a pairwise t-test at a significance level of 5%). The p-value for this test was 2.2e-16, which means these two techniques present a statistically significant difference for this data. In the future, this work will be used in an offline analysis module, with the goal of detecting the team strategy that will maximize the final game result against a specific opponent. | [
"knowledge discovery from historical data",
"data mining",
"feature selection",
"soccer simulation"
] | [
"M",
"M",
"M",
"R"
] |
-vhz8aE | Numerical solution of the 'classical' Boussinesq system | We consider the 'classical' Boussinesq system of water wave theory, which belongs to the class of Boussinesq systems modelling two-way propagation of long waves of small amplitude on the surface of water in a horizontal channel. (We also consider its completely symmetric analog.) We discretize the initial-boundary-value problem for these systems, corresponding to homogeneous Dirichlet boundary conditions on the velocity variable at the endpoints of a finite interval, using fully discrete Galerkin-finite element methods of high accuracy. We use the numerical schemes as exploratory tools to study the propagation and interactions of solitary-wave solutions of these systems, as well as other properties of their solutions. | [
"'classical' boussinesq systems",
"water waves",
"initial-boundary-value problems",
"fully discrete galerkin-finite element methods",
"solitary waves"
] | [
"P",
"P",
"P",
"P",
"M"
] |
6SEc7-2 | Leader neurons in leaky integrate and fire neural network simulations | In this paper, we highlight the topological properties of leader neurons, whose existence is an experimental fact. Several experimental studies show the existence of leader neurons in population bursts of activity in 2D living neural networks (Eytan and Marom, J Neurosci 26(33):8465-8476, 2006; Eckmann et al., New J Phys 10(015011), 2008). A leader neuron is defined as a neuron which fires at the beginning of a burst (respectively network spike) more often than we expect by chance considering its mean firing rate. This means that leader neurons have some burst triggering power beyond a chance-level statistical effect. In this study, we characterize these leader neuron properties. This naturally leads us to simulate neural 2D networks. To build our simulations, we choose the leaky integrate and fire (lIF) neuron model (Gerstner and Kistler 2002; Cessac, J Math Biol 56(3):311-345, 2008), which allows fast simulations (Izhikevich, IEEE Trans Neural Netw 15(5):1063-1070, 2004; Gerstner and Naud, Science 326:379-380, 2009). The dynamics of our lIF model has stable leader neurons in the burst population that we simulate. These leader neurons are excitatory neurons and have a low membrane potential firing threshold. Apart from these first two properties, the conditions required for a neuron to be a leader neuron are difficult to identify and seem to depend on several parameters involved in the simulations themselves. However, a detailed linear analysis shows a trend of the properties required for a neuron to be a leader neuron. Our main finding is: a leader neuron sends signals to many excitatory neurons as well as to few inhibitory neurons, and a leader neuron receives signals from only a few other excitatory neurons. Our linear analysis exhibits five essential properties of leader neurons, each with different relative importance. This means that considering a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of predicting which neuron is a good leader neuron and which is not. Our prediction formula correctly assesses leadership for at least ninety percent of neurons. | [
"leader",
"neuron",
"integrate and fire",
"simulation",
"burst",
"model"
] | [
"P",
"P",
"P",
"P",
"P",
"P"
] |
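A minimal lIF network along the lines described above - leaky integration, threshold/reset, sparse excitatory coupling - can be simulated in a few lines (all constants below are arbitrary choices, not the paper's parameters). Identifying leaders would then amount to recording, for each population burst, which neurons fire at its onset more often than chance.

```python
import numpy as np

rng = np.random.default_rng(5)
N, steps, dt = 100, 2000, 0.1             # neurons, time steps, step size (ms)
tau, v_th, v_reset = 20.0, 1.0, 0.0       # membrane constants (arbitrary units)
W = (rng.random((N, N)) < 0.1) * 0.12     # sparse random excitatory weights
np.fill_diagonal(W, 0.0)
v = rng.random(N) * v_th                  # random initial potentials
spikes = np.zeros(N, dtype=int)

for _ in range(steps):
    fired = v >= v_th                     # threshold crossing
    spikes += fired
    v[fired] = v_reset                    # reset after a spike
    i_ext = 0.06 + 0.02 * rng.standard_normal(N)       # noisy external drive
    v = v + dt * (-v / tau + i_ext) + W @ fired        # leak + input + recurrent kicks

print("mean spikes per neuron:", spikes.mean())
```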
PUJVUHP | Exploring a coarse-grained distributive strategy for finite-difference Poisson-Boltzmann calculations | We have implemented and evaluated a coarse-grained distributive method for finite-difference Poisson-Boltzmann (FDPB) calculations of large biomolecular systems. This method is based on the electrostatic focusing principle of decomposing a large fine-grid FDPB calculation into multiple independent FDPB calculations, each of which focuses on only a small and specific portion (block) of the large fine grid. We first analyzed the impact of the focusing approximation upon the accuracy of the numerical reaction field energies and found that a reasonable relative accuracy of 10^(-3) can be achieved when the buffering space is set to be 16 grid points and the block dimension is set to be at least (1/6)^3 of the fine-grid dimension, as in the one-block focusing method. The impact upon efficiency of the use of buffering space to maintain enough accuracy was also studied. It was found that an "optimal" multi-block dimension exists for a given computer hardware setup, and this dimension is more or less independent of the solute geometries. A parallel version of the distributive focusing method was also implemented. Given the proper settings, the distributive method was able to achieve respectable parallel efficiency with tested biomolecular systems on a loosely connected computer cluster. | [
"poisson-boltzmann",
"electrostatic focusing",
"finite difference",
"distributive computing",
"domain decomposition"
] | [
"P",
"P",
"U",
"R",
"U"
] |
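The focusing principle builds on a basic finite-difference relaxation; below is a sketch of the inner solver (plain Poisson with Jacobi iteration - the actual FDPB solver also handles the Boltzmann term and dielectric boundaries, which we skip, and the grid and charge here are toy choices).

```python
import numpy as np

def solve_poisson(rho, h=1.0, iters=500, boundary=None):
    """Jacobi relaxation for -laplacian(phi) = rho on a cubic grid.
    `boundary` fixes the outer faces; a focusing run would fill them from a
    coarser parent solution instead of leaving them at zero."""
    phi = np.zeros_like(rho) if boundary is None else boundary.copy()
    for _ in range(iters):
        phi[1:-1, 1:-1, 1:-1] = (
            phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
            phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
            phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] +
            h * h * rho[1:-1, 1:-1, 1:-1]) / 6.0
    return phi

n = 32
rho = np.zeros((n, n, n))
rho[n // 2, n // 2, n // 2] = 1.0            # single point charge
phi = solve_poisson(rho)
# A focused sub-block run would interpolate `phi` onto a finer grid covering
# one block plus the buffering layer discussed above, then relax again.
print(phi[n // 2, n // 2, n // 2])
```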
3zrnjVk | Exploring user emotion in microblogs for music recommendation | Utilize microblogs to extract users' emotions. Correlate users, music and the users' emotions. Develop an emotion-aware method to perform music recommendation. | [
"music recommendation",
"emotion-aware",
"emotion analysis",
"song-document association"
] | [
"P",
"P",
"M",
"U"
] |
-8wcwwn | Evolutionary compact embedding for large-scale image classification | Effective dimensionality reduction is a classical research area for many large-scale analysis tasks in computer vision. Several recent methods attempt to learn either graph embedding or binary hashing for fast and accurate applications. In this paper, we propose a novel framework to automatically learn the task-specific compact coding, called evolutionary compact embedding (ECE), which can be regarded as an optimization algorithm combining genetic programming (GP) and a boosting trick. As an evolutionary computation methodology, GP can solve problems inspired by natural evolution without any prior knowledge of the solutions. In our evolutionary architecture, each bit of ECE is iteratively computed using a binary classification function, which is generated through GP evolving by jointly minimizing its empirical risk with the AdaBoost strategy on a training set. We address this as greedy optimization leading to small Hamming distances for similar samples and large distances for dissimilar samples. We then evaluate ECE on four image datasets: USPS digital hand-writing, CMU PIE face, CIFAR-10 tiny image and SUN397 scene, showing the accurate and robust performance of our method for large-scale image classification. | [
"evolutionary compact embedding",
"large-scale image classification",
"dimensionality reduction",
"genetic programming",
"adaboost"
] | [
"P",
"P",
"P",
"P",
"P"
] |
17Yjpa8 | PLUMED-GUI: An environment for the interactive development of molecular dynamics analysis and biasing scripts | PLUMED-GUI is an interactive environment to develop and test complex PLUMED scripts within the Visual Molecular Dynamics (VMD) environment. Computational biophysicists can take advantage of both PLUMED's rich syntax to define collective variables (CVs) and VMD's chemically-aware atom selection language, while working within a natural point-and-click interface. Pre-defined templates and syntax mnemonics facilitate the definition of well-known reaction coordinates. Complex CVs, e.g. involving reference snapshots used for RMSD or native contacts calculations, can be built through dialogs that provide a synoptic view of the available options. Scripts can be either exported for use in simulation programs, or evaluated on the currently loaded molecular trajectories. Script development takes place without leaving VMD, thus enabling an incremental try-see-modify development model for molecular metrics. Program title: PLUMED-GUI (Collective variable analysis plugin) Catalogue identifier: AERU_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: 3-clause BSD Open Source No. of lines in distributed program, including test data, etc.: 2651 No. of bytes in distributed program, including test data, etc.: 32359 Distribution format: tar.gz Programming language: TCL/TK. Computer: Workstations, PCs. Operating system: Linux/Unix, OSX, Windows. RAM: Sufficient to run PLUMED [1] and VMD [2]. Classification: 3, 23. Subprograms used: Compute and visualize values of collective variables on molecular dynamics trajectories from within VMD, and interactively develop biasing scripts for the estimation of free-energy surfaces in PLUMED. Solution method: A graphical user interface is integrated in VMD and allows the user to interactively develop and run analysis scripts. Menus and dialogs provide mnemonics and documentation on the syntax to define complex CVs. Restrictions: Tested on systems up to 100,000 atoms. Unusual features: VMDPLUMED is not a standalone program but a plugin that provides access to PLUMED's analysis features from within VMD. Additional comments: Distributed with VMD since version 1.9.0. Manual update may be required to access the latest features. Running time: Computations of the values of collective variables, performed by the underlying PLUMED code, depend on the size of the system and the length of the trajectory; they are generally negligible with respect to simulation time. References: G. A. Tribello, M. Bonomi, D. Branduardi, C. Camilloni, G. Bussi, PLUMED 2: New feathers for an old bird, Computer Physics Communications 185 (2014) 604. W. Humphrey, A. Dalke, K. Schulten, VMD: visual molecular dynamics, J Mol Graph 14 (1996) 33-38. | [
"plumed",
"molecular dynamics",
"vmd",
"collective variables",
"graphical user interface",
"metadynamics"
] | [
"P",
"P",
"P",
"P",
"P",
"U"
] |
4awvRDE | Crowdsourcing tasks to social networks in BPEL4People | Human interactions are a substantial part of today's business processes. In service-oriented systems this has led to specifications such as WS-HumanTask and BPEL4People, which aim at standardizing the interaction protocol between software processes and humans. These specifications received considerable attention from major industry players due to their extensibility and interoperability. Recently, crowdsourcing has emerged as a new paradigm for leveraging a human workforce using Web technologies. We argue that crowdsourcing techniques and platforms could benefit from XML-based standards such as WS-HumanTask and BPEL4People, as these specifications allow for extensibility and cross-platform operation. However, most efforts to model human interactions using BPEL4People focus on relatively static role models for selecting the right person to interact with. Thus, BPEL4People is not well suited for specifying and executing processes that crowdsource tasks to online communities. Here, we extend BPEL4People with non-functional properties that make it possible to cope with the inherent dynamics of crowdsourcing processes. Such properties include human capabilities and skill levels. We discuss the formation of social networks that are particularly beneficial for processing extended BPEL4People tasks. Furthermore, we present novel approaches for the automated assignment of tasks to a social group. The feasibility of our approach is shown through a proof-of-concept implementation of the various concepts, as well as simulations and experiments to evaluate our ranking and selection approach. | [
"crowdsourcing",
"social networks",
"bpel4people",
"non-functional properties"
] | [
"P",
"P",
"P",
"P"
] |
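A minimal sketch of the skill-based assignment idea: rank candidate workers for an extended BPEL4People human task by how well their skill levels match the task's non-functional requirements. The `Worker` structure and the scoring formula below are hypothetical; the paper's ranking additionally exploits the social-network structure, which is omitted here.

```python
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    skills: dict = field(default_factory=dict)  # skill -> level in [0, 1]

def rank_workers(task_skills, workers, top_k=3):
    """Rank candidate workers for a crowdsourced task.

    Hypothetical scoring: weighted sum of the worker's levels for the
    skills the task requires (weights express the task's priorities).
    """
    def score(w):
        return sum(weight * w.skills.get(skill, 0.0)
                   for skill, weight in task_skills.items())
    return sorted(workers, key=score, reverse=True)[:top_k]

# Example: a translation task weighting two required skills.
workers = [Worker("alice", {"translation": 0.9, "review": 0.4}),
           Worker("bob", {"translation": 0.5, "review": 0.9})]
best = rank_workers({"translation": 0.7, "review": 0.3}, workers)
print([w.name for w in best])
```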
36BU6w& | Chaos and bifurcation in a third-order digital phase-locked loop | The nonlinear dynamics of a third-order zero-crossing digital phase-locked loop (ZCDPLL) has been investigated. It has been observed that, while first- and second-order ZCDPLLs show a period-doubling route to chaos, a third-order ZCDPLL manifests a disjoint periodic attractor in its route to chaos. The complexity and predictability of the system dynamics have also been characterized using nonlinear dynamical measures such as the Lyapunov exponent, Kaplan-Yorke dimension, correlation dimension and Kolmogorov entropy. All the results show that the chaos in a third-order ZCDPLL is low dimensional. | [
"chaos",
"bifurcation",
"digital phase locked loop",
"dynamical measures"
] | [
"P",
"P",
"P",
"P"
] |
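The largest Lyapunov exponent cited above is the orbit average of log|f'(x)| for an iterated map. The sketch below illustrates the measure on the logistic map as a stand-in; the actual ZCDPLL is a three-dimensional map, for which the Jacobian-based (Benettin/QR) variant of the same computation would be used.

```python
import math

def lyapunov_1d(f, df, x0, n_iter=100_000, n_transient=1_000):
    """Largest Lyapunov exponent of a 1-D map x_{k+1} = f(x_k):
    the orbit average of log|f'(x_k)|; positive values indicate chaos."""
    x = x0
    for _ in range(n_transient):               # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(df(x)) + 1e-300)   # guard against f'(x) = 0
        x = f(x)
    return acc / n_iter

# Stand-in: the logistic map at r = 4 is chaotic, exponent ~ ln 2 ~ 0.693.
r = 4.0
print(lyapunov_1d(lambda x: r * x * (1 - x),
                  lambda x: r * (1 - 2 * x),
                  x0=0.3))
```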
LQCHqty | BALANCES AND ABELIAN COMPLEXITY OF A CERTAIN CLASS OF INFINITE TERNARY WORDS | A word u defined over an alphabet A is c-balanced (c is an element of N) if, for all pairs of factors v, w of u of the same length and for all letters a in A, the difference between the number of occurrences of a in v and in w is less than or equal to c. In this paper we consider a ternary alphabet A = {L, S, M} and a class of substitutions phi_p defined by phi_p(L) = L^p S, phi_p(S) = M, phi_p(M) = L^(p-1) S, where p > 1. We prove that the fixed point of phi_p, formally written as phi_p^infinity(L), is 3-balanced and that its Abelian complexity is bounded above by the value 7, regardless of the value of p. We also show that both these bounds are optimal, i.e. they cannot be improved. | [
"abelian complexity",
"ternary word",
"substitution",
"balance property"
] | [
"P",
"P",
"P",
"M"
] |
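The substitution and the stated bound are easy to check numerically on a long prefix of the fixed point. The sketch below builds phi_p^infinity(L) for p = 3 and counts distinct Parikh vectors among factors of each length (the abelian complexity); function names are ours.

```python
from collections import Counter

def fixed_point(sub, seed="L", length=5000):
    """Prefix of the fixed point of a substitution, iterated from `seed`.
    Works because sub[seed] starts with seed, so prefixes nest."""
    w = seed
    while len(w) < length:
        w = "".join(sub[c] for c in w)
    return w[:length]

def abelian_complexity(w, n):
    """Number of distinct Parikh vectors among factors of w of length n."""
    return len({tuple(sorted(Counter(w[i:i + n]).items()))
                for i in range(len(w) - n + 1)})

p = 3
phi = {"L": "L" * p + "S", "S": "M", "M": "L" * (p - 1) + "S"}
w = fixed_point(phi)
# Consistent with the paper's bound: never exceeds 7 on this prefix.
print(max(abelian_complexity(w, n) for n in range(1, 60)))
```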
1K2zFwV | Parallel computing in topology optimization of structures with stress constraints | We apply a minimum-weight topology optimization formulation with stress constraints. Important drawbacks of conventional topology optimization formulations are avoided. The sensitivity analysis and the optimization algorithm are computed in parallel using OpenMP parallelization directives. Suitable speed-up is obtained on conventional multi-core computers. | [
"parallel computing",
"topology optimization",
"stress constraints",
"minimum weight",
"openmp"
] | [
"P",
"P",
"P",
"P",
"P"
] |
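The per-element sensitivity evaluations are independent of one another, which is what makes the OpenMP parallelization effective. The Python sketch below mirrors that embarrassingly parallel structure with a process pool; the stress-sensitivity formula is a SIMP-like placeholder, not the paper's actual expression.

```python
import numpy as np
from multiprocessing import Pool

def element_stress_sensitivity(args):
    """Hypothetical per-element work: derivative of a stress measure with
    respect to the element's design density (placeholder formula)."""
    rho, stress, p = args
    return p * rho ** (p - 1) * stress

def parallel_sensitivities(rho, stress, p=3.0, workers=4):
    # Each element's sensitivity is independent of the others, the same
    # structure the paper exploits with OpenMP parallel-for directives.
    with Pool(workers) as pool:
        return np.array(pool.map(element_stress_sensitivity,
                                 zip(rho, stress, [p] * len(rho))))

if __name__ == "__main__":
    rho = np.random.rand(10_000)      # element design densities
    stress = np.random.rand(10_000)   # element stress measures
    print(parallel_sensitivities(rho, stress)[:5])
```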
2fh9GR: | The extensions L-n* of the formal system L* and their completeness | In this paper, Pavelka's method of closely combining semantics and syntax is applied to the study of the propositional calculus formal system L*. Some constant truth values are taken as formulas, formulas are fuzzified in two ways, semantically and syntactically, and the inference processes are fuzzified as well. A sequence of new extensions {L-n*} of the system L* is proposed, and the completeness of L-n* is proved. (C) 2002 Elsevier Science Inc. All rights reserved. | [
"extension",
"completeness",
"many-valued logic",
"fuzzy logic",
"propositional calculus system"
] | [
"P",
"P",
"U",
"U",
"R"
] |
2vh-iDv | Service Adaptability in Multimedia Wireless Networks | Next-generation wireless communication systems aim at supporting wireless multimedia services with different quality-of-service (QoS) and bandwidth requirements. Effective management of the limited radio resources is therefore important to enhance network performance. In this paper, we propose a QoS-adaptive multimedia service framework for controlling the traffic in multimedia wireless networks (MWN) that enhances the current methods used in cellular environments. The proposed framework is designed to take advantage of an adaptive bandwidth allocation (ABA) algorithm for new calls in order to improve system utilization and reduce the blocking probability of new calls. The performance of our framework is compared to an existing framework from the literature. Simulation results show that our QoS-adaptive multimedia service framework outperforms the existing framework in terms of new call blocking probability, handoff call dropping probability, and bandwidth utilization. | [
"bandwidth adaptation",
"call admission control",
"quality of service",
"real-time multimedia traffic",
"wireless cellular networks"
] | [
"R",
"M",
"M",
"M",
"R"
] |
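A minimal sketch of an adaptive-bandwidth-allocation admission decision, assuming each adaptive call can be degraded between a minimum and a maximum bandwidth: the new call is blocked only if freeing capacity by degrading ongoing calls still cannot satisfy its minimum requirement. The data layout and the degradation order are hypothetical, not the paper's exact policy.

```python
def admit_new_call(calls, capacity, req_max, req_min):
    """Try to admit a new adaptive call needing bandwidth in [req_min, req_max].

    Hypothetical ABA policy: degrade ongoing adaptive calls toward their
    minimum bandwidth to free capacity before blocking the new call.
    Each call is a dict with 'bw', 'min' and 'max' keys.
    """
    free = capacity - sum(c["bw"] for c in calls)
    if free < req_min:
        # Degrade existing calls, largest slack first.
        for c in sorted(calls, key=lambda c: c["bw"] - c["min"], reverse=True):
            give = min(c["bw"] - c["min"], req_min - free)
            c["bw"] -= give
            free += give
            if free >= req_min:
                break
    if free < req_min:
        return False            # block: even full degradation cannot fit it
    grant = min(req_max, free)  # admit at the best bandwidth available
    calls.append({"bw": grant, "min": req_min, "max": req_max})
    return True
```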
13gX9hY | Augmented reality in teaching of electrodynamics | Purpose - The purpose of this paper is to present an application of augmented reality (AR) in the context of teaching electrodynamics. The AR visualization technique is applied to electromagnetic fields. The execution of the numerical simulations as well as the preparation of the AR display is described. The presented examples demonstrate an application of this technique in the teaching of electrodynamics. Design/methodology/approach - The 3D electromagnetic fields are computed with the finite element method (FEM) and visualized with an AR display. Findings - AR is a vivid method for the visualization of electromagnetic fields. Students as well as experts can easily connect the characteristics of the fields with the physical object. Research limitations/implications - The focus of the presented work has been on the application of AR in a lecture room. There, easy handling of the presentation along with low hardware requirements is important. Practical implications - The presented approach has low hardware requirements. Hence, a presentation of electromagnetic fields with AR in a lecture room can easily be given. AR helps students to understand electromagnetic field theory. Originality/value - Well-known methods like FEM and AR have been combined into a visualization technique for electromagnetic fields which can be easily applied in a lecture room. | [
"teaching",
"electromagnetic fields",
"finite element analysis",
"integral equations",
"teaching aids"
] | [
"P",
"P",
"M",
"U",
"M"
] |
3e:kiaf | generic and automatic address configuration for data center networks | Data center networks encode locality and topology information into their server and switch addresses for performance and routing purposes. For this reason, traditional address configuration protocols such as DHCP require a huge amount of manual input, leaving them error-prone. In this paper, we present DAC, a generic and automatic Data center Address Configuration system. Starting from an automatically generated blueprint, which defines the connections of servers and switches labeled by logical IDs (e.g., IP addresses), DAC first learns the physical topology labeled by device IDs (e.g., MAC addresses). At the core of DAC are its device-to-logical ID mapping and its malfunction detection. DAC's key innovation is to abstract the device-to-logical ID mapping as a graph isomorphism problem, which it solves with low time complexity by leveraging the attributes of data center network topologies. Its malfunction detection scheme detects errors such as device and link failures and miswirings, including the most difficult case, where miswirings do not cause any node-degree change. We have evaluated DAC via simulation, implementation and experiments. Our simulation results show that DAC can accurately find all of the hardest-to-detect malfunctions and can autoconfigure a large data center with 3.8 million devices in 46 seconds. In our implementation, we successfully autoconfigure a small 64-server BCube network within 300 milliseconds, showing that DAC is a viable solution for data center autoconfiguration. | [
"address configuration",
"data center networks",
"graph isomorphism"
] | [
"P",
"P",
"P"
] |
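The device-to-logical ID mapping can be phrased directly as graph isomorphism between the blueprint and the learned physical topology. The sketch below uses networkx's generic VF2 matcher as a stand-in; DAC's own algorithm achieves its low time complexity by exploiting the regular structure of data center topologies, which a generic matcher does not. The addresses shown are illustrative placeholders.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Blueprint: logical IDs (e.g., IP addresses) and their intended links.
blueprint = nx.Graph([("10.0.0.1", "10.0.0.2"), ("10.0.0.2", "10.0.0.3")])
# Learned physical topology: device IDs (e.g., MAC addresses) and links.
physical = nx.Graph([("aa:01", "aa:02"), ("aa:02", "aa:03")])

gm = isomorphism.GraphMatcher(physical, blueprint)
if gm.is_isomorphic():
    print(gm.mapping)  # device ID -> logical ID assignment
else:
    # Any mismatch signals a malfunction: failure, miswiring, etc.
    print("malfunction: physical topology deviates from the blueprint")
```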