Dataset records follow, one field per line in the order: id, title, abstract, keyphrases, prmu.
Schema:
  id          string (length 7)
  title       string (3 to 578 characters)
  abstract    string (0 to 16.7k characters)
  keyphrases  sequence of strings
  prmu        sequence of per-keyphrase labels (P, R, M, or U)
18JShvf
A neural network based framework for directional primitive extraction
This paper describes a computational framework for the extraction of low-level directional primitives in images. The system is divided into two stages. The first performs the low-level directional primitive extraction through Gabor wavelet decomposition. The second reduces the high dimensionality of the Gabor decomposition results by means of self-organised structures. The system introduced has two main advantages: it provides accurate and reliable information, and it produces good results on different image types without intervention of the final user. These advantages will be demonstrated by comparing our system with a classical edge detector.
[ "gabor wavelets", "self-organised structures", "growing cell structures", "growing neural gas", "chromaticity diagram" ]
[ "P", "M", "M", "M", "U" ]
3Fzg6kV
Accessibility barriers for users of screen readers in the Moodle learning content management system
In recent decades, the use of the Internet has spread rapidly into diverse social spheres including that of education. Currently, most educational centers make use of e-learning environments created through authoring tool applications like learning content management systems (LCMSs). However, most of these applications currently present accessibility barriers that make the creation of accessible e-learning environments difficult for teachers and administrators. In this paper, the accessibility of the Moodle authoring tool, one of the most frequently used LCMSs worldwide, is evaluated. More specifically, the evaluation is carried out from the perspective of two visually impaired users accessing content through screen readers, as well as a heuristic evaluation considering the World Wide Web Consortium's Authoring Tool Accessibility Guidelines. The evaluation results demonstrate that Moodle presents barriers for screen reader users, limiting their ability to access the tool. One example of accessibility problems for visually impaired users is the frequent inability to publish learning contents without assistance. In light of these results, the paper offers recommendations that can be followed to reduce or eliminate these accessibility barriers.
[ "accessibility", "authoring tool", "screen reader users", "atag", "lcms" ]
[ "P", "P", "P", "U", "U" ]
3&2DKjv
Defeasible time-stepping
Some physical systems need to be modeled for simulation so that they evolve in time in both a time-driven (time-stepped) and an event-driven manner. An example is a system of colliding particles that move in a dynamically changing potential field. Such behavior can be observed in the modeling of real physical systems such as nuclear collisions and the transport of neutrinos in a supernova explosion. We have developed a distributed algorithm for the optimistic simulation of systems that evolve according to time steps of known duration, but also require simulation time to evolve in an event-driven manner within each step, in order to reproduce the occurrence of discrete events. It is then a hybrid simulation algorithm, since it synchronizes the computation according to both the time- and event-driven aspects of the physical system model. One of the main characteristics of our algorithm, which we call defeasible time-stepping (DTS), is that its time-stepping portion is also subject to revision by rollback. This paper introduces DTS and its properties, and contains performance figures for an implementation of the algorithm when applied to the simulation of a nuclear physics problem on an Intel iPSC/860. We also show that DTS imposes an upper bound on the difference of clock values of the various logical processes that participate in the simulation. This upper bound grows linearly with the diameter of the directed graph underlying the physical system model, and can be calibrated by adjusting a proportionality constant. As a consequence, DTS can also be viewed as a mechanism for limiting optimism in strictly event-driven simulations.
[ "distributed simulation", "hybrid time evolution", "optimism limitation" ]
[ "R", "M", "R" ]
4Y-YyzB
Likelihood-free parallel tempering
Approximate Bayesian Computational (ABC) methods, or likelihood-free methods, have appeared in the past fifteen years as useful methods to perform Bayesian analysis when the likelihood is analytically or computationally intractable. Several ABC methods have been proposed: MCMC methods have been developed by Marjoram et al. (2003) and by Bortot et al. (2007) for instance, and sequential methods have been proposed among others by Sisson et al. (2007), Beaumont et al. (2009) and Del Moral et al. (2012). Recently, sequential ABC methods have appeared as an alternative to ABC-PMC methods (see for instance McKinley et al., 2009; Sisson et al., 2007). In this paper a new algorithm combining population-based MCMC methods with ABC requirements is proposed, using an analogy with the parallel tempering algorithm (Geyer 1991). Performance is compared with existing ABC algorithms on simulations and on a real example.
[ "likelihood-free", "parallel tempering", "approximated bayesian computational", "population-based", "intractable likelihood", "monte carlo markov chain" ]
[ "P", "P", "P", "P", "R", "U" ]
4yt8bbH
Parametric study of a sprocket system during heat-treatment process
The entire heat-treatment process, including induction heating followed by oil quenching, for sprockets made of S45C mid-carbon steel has been systematically analysed. An attempt is made to investigate the effect of geometrical parameters, such as cutout length and height, inner diameter and number of teeth, on the distortion of sprockets by using numerical simulations. In the heat-treatment model, phase transformation and plasticity are considered and temperature-dependent material properties are incorporated. The simulation results of the deformation of the sprockets after heat-treatment show a good agreement with experimental data.
[ "distortion", "phase transformation", "plasticity", "heat treatment", "temperature dependence" ]
[ "P", "P", "P", "M", "U" ]
44Q6d2T
The infinite-dimensional widths and optimal recovery of generalized Besov classes
The purpose of the present paper is to consider some weak asymptotic problems concerning the infinite-dimensional Kolmogorov widths, the infinite-dimensional linear widths, the infinite-dimensional Gel'fand widths and optimal recovery in Besov space. The results obtained and the methods used in Besov space are easily generalized; hence, in this paper these results are stated and proved for an extension of Besov spaces on R-d. Meanwhile, the representation theorem and the approximation of these spaces by polynomial splines are discussed.
[ "infinite-dimensional width", "optimal recovery", "generalized besov classes" ]
[ "P", "P", "P" ]
-FmnpZU
Localization-Based Radio Model Calibration for Fault-Tolerant Wireless Mesh Networks
Wireless mesh networks offer flexibility for industrial automation but, in these environments with changing propagation conditions, it is challenging to guarantee radio coverage and connectivity. This paper contributes a new localization-based method for the calibration of radio propagation models. The idea is to find the locations of the mobile stations via localization and to use radio signal strength measurements from them for adjusting the radio model parameters until the model better fits the real environment. This calibration method is integrated in our previously published fault-tolerance framework for guaranteeing the availability of radio coverage and connectivity of wireless mesh networks. It is used to automatically detect environmental dynamics (errors) at run-time and to propose a network reconfiguration before they lead to service failures. An evaluation in a real industrial scenario shows the practicability of our approach.
[ "localization", "radio model calibration", "fault-tolerance", "radio coverage", "wireless networks" ]
[ "P", "P", "P", "P", "R" ]
5&CBPW9
P Systems with Endosomes
P Systems are computing devices inspired by the structure and the functioning of a living cell. A P System consists of a hierarchy of membranes, each of them containing a multiset of objects, a set of evolution rules, and possibly other membranes. Evolution rules are applied to the objects of the same membrane with maximal parallelism. In this paper we present an extension of P Systems, called P Systems with Endosomes (PE Systems), in which endosomes can be explicitly modeled. We show that PE Systems are universal even if only the simplest form of evolution rules is considered, and we give one application example.
[ "p systems", "endosomes", "pe systems" ]
[ "P", "P", "P" ]
17Ks8:t
Continuous-time relaxation labeling processes
We study the dynamical properties of two new relaxation labeling schemes described in terms of differential equations, and hence evolving in continuous time. This contrasts with the customary approach to defining relaxation labeling algorithms, which prefers discrete time. Continuous-time dynamical systems are particularly attractive because they can be implemented directly in hardware circuitry, and the study of their dynamical properties is simpler and more elegant. They are also more plausible as models of biological visual computation. We prove that the proposed models enjoy exactly the same dynamical properties as the classical relaxation labeling schemes, and show how they are intimately related to Hummel and Zucker's now classical theory of constraint satisfaction. In particular, we prove that, when a certain symmetry condition is met, the dynamical system's behavior is governed by a Liapunov function which turns out to be (the negative of) a well-known consistency measure. Moreover, we prove that the fundamental dynamical properties of the systems are retained when the symmetry restriction is relaxed. We also analyze the properties of a simple discretization of the proposed dynamics, which is useful in digital computer implementations. Simulation results are presented which show the practical behavior of the models.
[ "relaxation labeling processes", "differential equations", "dynamical systems", "consistency" ]
[ "P", "P", "P", "P" ]
-UmCKDR
Intra Prediction Based on Markov Process Modeling of Images
In recent video coding standards, intraprediction of a block of pixels is performed by copying neighbor pixels of the block along an angular direction inside the block. Each block pixel is predicted from only one or few directionally aligned neighbor pixels of the block. Although this is a computationally efficient approach, it ignores potentially useful correlation of other neighbor pixels of the block. To use this correlation, a general linear prediction approach is proposed, where each block pixel is predicted using a weighted sum of all neighbor pixels of the block. The disadvantage of this approach is the increased complexity because of the large number of weights. In this paper, we propose an alternative approach to intraprediction, where we model image pixels with a Markov process. The Markov process model accounts for the ignored correlation in standard intraprediction methods, but uses few neighbor pixels and enables a computationally efficient recursive prediction algorithm. Compared with the general linear prediction approach that has a large number of independent weights, the Markov process modeling approach uses a much smaller number of independent parameters and thus offers significantly reduced memory or computation requirements, while achieving similar coding gains with offline computed parameters.
[ "markov processes", "video coding", "image coding", "prediction methods" ]
[ "P", "P", "R", "R" ]
4rrWBv3
Catalytic Oligonucleotides Targeting EGR-1 As Potential Inhibitors of In-Stent Restenosis
This brief review discusses recent strategies targeting the zinc finger transcription factor and immediate-early gene product Egr-1 with catalytic DNA in efforts to inhibit postangioplasty restenosis.
[ "catalytic oligonucleotides", "restenosis", "early growth response factor-i" ]
[ "P", "P", "U" ]
aiWH7KW
Synchronizing AMS Assertions with AMS Simulation: From Theory to Practice
The verification community anticipates the adoption of assertions in the Analog and Mixed-Signal (AMS) domain in the near future. Several questions need to be answered before AMS assertions are brought into practice, such as: (a) How will the languages for AMS assertions be different from the ones in the digital domain? (b) Does the analog simulator have to be assertion aware? (c) If so, then how and where on the time line will the AMS assertion checker synchronize with the analog simulator? and (d) What will be the performance penalty for monitoring AMS assertions accurately over analog simulation? This article attempts to answer these questions through theoretical analysis and empirical results obtained from industrial test cases. We study logics which extend Linear Temporal Logic (LTL) with predicates over real variables, and show that further extensions allowing the binding of real-valued variables across time make the logic undecidable. We present a toolkit which can integrate with existing AMS simulators for checking AMS assertions on practical designs. We study the problem of synchronizing the AMS simulator with the AMS assertion checker and demonstrate the performance penalty of different synchronization options.
[ "simulation", "verification", "mixed-signal", "temporal logic", "satisfiability" ]
[ "P", "P", "P", "P", "U" ]
4kNufiB
rotational symmetry field design on surfaces
Designing rotational symmetries on surfaces is a necessary task for a wide variety of graphics applications, such as surface parameterization and remeshing, painterly rendering and pen-and-ink sketching, and texture synthesis. In these applications, the topology of a rotational symmetry field such as singularities and separatrices can have a direct impact on the quality of the results. In this paper, we present a design system that provides control over the topology of rotational symmetry fields on surfaces. As the foundation of our system, we provide comprehensive analysis for rotational symmetry fields on surfaces and present efficient algorithms to identify singularities and separatrices. We also describe design operations that allow a rotational symmetry field to be created and modified in an intuitive fashion by using the idea of basis fields and relaxation. In particular, we provide control over the topology of a rotational symmetry field by allowing the user to remove singularities from the field or to move them to more desirable locations. At the core of our analysis and design implementations is the observation that N-way rotational symmetries can be described by symmetric N-th order tensors, which allows an efficient vector-based representation that not only supports coherent definitions of arithmetic operations on rotational symmetries but also enables many analysis and design operations for vector fields to be adapted to rotational symmetry fields. To demonstrate the effectiveness of our approach, we apply our design system to pen-and-ink sketching and geometry remeshing.
[ "rotational symmetry", "field design", "surfaces", "remeshing", "topology", "field analysis", "non-photorealistic rendering" ]
[ "P", "P", "P", "P", "P", "R", "M" ]
-1PqPmx
STEP-compliant CNC system for turning: Data model, architecture, and implementation
STEP-NC, a new data model for the CAD/CAM/CNC chain, is expected to encompass the whole scope of e-manufacturing. The new data model, formalized as ISO 14649, is under development by ISO TC184 SC1 and SC4 as a replacement for the old standard, the so-called G&M codes, formalized as ISO 6983, which has been used since the 1950s. As the new data model is being established, the development and implementation of STEP-compliant CAD/CAM/CNC systems based on it is drawing worldwide attention. Several systems have been reported in such international conventions as the ISO Expert Committee Meeting. Up to the present time, all the STEP-CNC systems have been intended for milling operations. In this paper, the authors first present a STEP-compliant CNC system for turning, including the data model, followed by a generic architecture and functionality. Implementation results obtained from a prototype system called TurnSTEP are provided. Based on the results, the authors are convinced of the validity of the STEP-NC data model together with the effectiveness of the STEP-CNC system for turning.
[ "step-compliant cnc", "step-nc", "cadcamcnc chain", "iso 14649", "turning system", "e-manufacturing" ]
[ "P", "P", "P", "P", "P", "U" ]
-ngxPvW
A variational principle for coupled nonlinear Schrödinger equations with variable coefficients and high nonlinearity
Via He's semi-inverse method, a variational principle is established for coupled nonlinear Schrödinger equations with variable coefficients and high nonlinearity. The result obtained includes the ones known from the open literature as special cases.
[ "variational principle", "nonlinear schrdinger equations", "semi-inverse method" ]
[ "P", "P", "P" ]
4ojF26Q
Metrics for Generalized Persistence Modules
We consider the question of defining interleaving metrics on generalized persistence modules over arbitrary preordered sets. Our constructions are functorial, which implies a form of stability for these metrics. We describe a large class of examples, inverse-image persistence modules, which occur whenever a topological space is mapped to a metric space. Several standard theories of persistence and their stability can be described in this framework. This includes the classical case of sublevelset persistent homology. We introduce a distinction between soft and hard stability theorems. While our treatment is direct and elementary, the approach can be explained abstractly in terms of monoidal functors.
[ "interleaving", "stability", "inverse-image persistence", "persistent topology", "sublinear projections", "superlinear families", "55u99", "68u05" ]
[ "P", "P", "P", "R", "U", "U", "U", "U" ]
2kZwD7C
Prenatal Exposure to Methamphetamine in the Rat
Methamphetamine (Meth) is an illicit substance known to interfere with catecholaminergic systems and a popular recreational drug among young adult women, that is, women of gestational age. Tyrosine hydroxylase (TH), the rate-limiting enzyme of the synthetic pathway of catecholamines, is a good marker to assess potential effects of Meth on catecholaminergic (particularly dopaminergic) systems. In the rat, prolonged neonatal Meth exposure altered several dopaminergic markers (TH activity and gene expression) in substantia nigra pars compacta (SN) and in caudate-putamen (TH activity) when animals matured. However, it was never verified whether gestational exposure to Meth might compromise the TH enzyme in the pups during the neonatal immature periods. The present study was designed to address this issue by analyzing TH gene expression, measured by in situ hybridization in SN and ventral tegmental area (VTA), dopaminergic areas that are well characterized as target areas for Meth, in rats prenatally exposed to this psychostimulant. To this end, dated pregnant Wistar rat dams received 5 mg Meth hydrochloride/kg body weight/day. It was administered subcutaneously from gestational day 8 until 22. The control group was pair-fed and saline injected, using the same experimental protocol as for Meth-treated dams. On the day of birth (postnatal day 0, PND 0), litters were culled to 8 pups, sex-balanced whenever possible, and were followed until the day of sacrifice (PND 7, 14, or 30). Meth treatment differentially affected TH mRNA levels in VTA and SN, in an age- and gender-dependent manner. Thus, TH mRNA levels were decreased in the VTA of PND 7 and PND 14 females gestationally exposed to Meth; this effect was not evident in males or on PND 30. TH mRNA levels also tended to decrease in the SN of PND 14 females gestationally exposed to Meth. Collectively, the present results indicated that gestational Meth exposure affects TH gene expression in postnatal life, a phenomenon that appears to be transient, since it is no longer evident by the end of the first month of life in the rat.
[ "prenatal", "methamphetamine", "rat", "tyrosine hydroxylase", "gene expression", "substantia nigra", "ventral tegmental area", "wistar rat", "postnatal" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
1&tktHk
Detection of abnormality in the electrocardiogram without prior knowledge by using the quantisation error of a self-organising map, tested on the European ischaemia database
Most systems for the automatic detection of abnormalities in the ECG require prior knowledge of normal and abnormal ECG morphology from pre-existing databases. An automated system for abnormality detection has been developed based on learning normal ECG morphology directly from the patient. The quantisation error from a self-organising map 'learns' the form of the patient's ECG and detects any change in its morphology. The system does not require prior knowledge of normal and abnormal morphologies. It was tested on 76 records from the European Society of Cardiology database and detected 40.5% of those first abnormalities declared by the database to be ischaemic. The system also responded to abnormalities arising from ECG axis changes and slow baseline drifts and revealed that ischaemic episodes are often followed by long-term changes in ECG morphology.
[ "ecg analysis", "morphology change detection", "ischaemia detection", "self-organizing map", "neural networks" ]
[ "M", "R", "R", "M", "U" ]
12jv6Xq
Designing e-collaboration technologies to facilitate compensatory adaptation
This article argues that e-collaboration technologies often pose obstacles to effective communication in complex collaborative tasks. The reason presented is that typically those technologies selectively suppress face-to-face communication elements that human beings have been designed by evolution to use extensively, while communicating with each other. It is argued that technology users invariably react to those obstacles by engaging in compensatory adaptation, whereby they change their communicative behavior in order to compensate for the obstacles. The article concludes with a call for more research on how e-collaboration technologies can be designed to facilitate compensatory adaptation.
[ "compensatory adaptation", "electronic communication", "electronic collaboration", "media naturalness", "knowledge communication" ]
[ "P", "M", "M", "U", "M" ]
1:yjgQA
Invariant based quartet puzzling
First proposed by Cavender and Felsenstein, and Lake, invariant based algorithms for phylogenetic reconstruction were widely dismissed by practicing biologists because invariants were perceived to have limited accuracy in constructing trees based on DNA sequences of reasonable length. Recent developments by algebraic geometers have led to the construction of lists of invariants which have been demonstrated to be more accurate on small sequences, but were limited in that they could only be used for trees with small numbers of taxa. We have developed and tested an invariant based quartet puzzling algorithm which is accurate and efficient for biologically reasonable data sets.
[ "invariants", "quartet puzzling", "phylogenetic reconstruction" ]
[ "P", "P", "P" ]
-zo5gVd
Inter-terminology mapping of nursing problems
Fifty-six percent of ICNP diagnosis/outcome concepts map to SNOMED CT. Mapping challenges include lexical variations and semantic differences. Coverage of positive nursing diagnostic concepts in SNOMED CT is low. Some judgment concepts were used interchangeably in nursing diagnostic concepts. A broad community-based review of cross-mapping is underway for quality assurance.
[ "mapping", "terminology", "international classification for nursing practice", "systematized nomenclature of medicineclinical terms" ]
[ "P", "U", "M", "M" ]
4:CfBom
Table-based Alpha Compression
In this paper we investigate low-bitrate compression of scalar textures such as alpha maps, down to one or two bits per pixel. We present two new techniques for 4 x 4 blocks, based on the idea from ETC to use index tables. We demonstrate that although the visual quality of the alpha maps is greatly reduced at these low bit rates, the quality of the final rendered images appears to be sufficient for a wide range of applications, thus allowing bandwidth savings of up to 75%. The 2 bpp version improves PSNR by over 2 dB compared to BTC at the same bit rate. The 1 bpp version is, to the best of our knowledge, the first public 1 bpp texture compression algorithm, which makes comparison hard. However, compared to just DXT5-compressing a subsampled texture, our 1 bpp technique improves PSNR by over 2 dB. Finally, we show that some aspects of the presented algorithms are also useful for the more common bit rate of four bits per pixel, achieving PSNR scores around 1 dB better than DXT5, over a set of test images.
[ "computer graphics [i.3.7]: texture" ]
[ "M" ]
1EHF26w
Switching LTI Models via Haar Transform for Non-Stationary Dynamic Systems Modeling
This paper presents a new technique for Linear Parameter Varying (LPV) and quasi-LPV system modeling, which can be used to represent a wide range of nonlinear dynamic systems by a set of switched Linear Time Invariant (LTI) models. The switched LPV model is obtained by minimizing, for frozen parameter values, the error between its dynamics and that of the original system. The proposed method is based on the use of the Haar transform, particularly on its properties of approximating any square-integrable function by a piecewise constant function and on its ability to concentrate energy in few coefficients in the transform domain. Numerical results presented here show the good performance of the proposed technique.
[ "modeling", "haar transform", "nonlinear control" ]
[ "P", "P", "M" ]
1fjJ9sb
Advanced clustering methods for mining chemical databases in forensic science
Heroin and cocaine gas chromatography data are analyzed using several clustering techniques. A database with clusters confirmed by police investigation is used to assess the potential of the analysis of the chemical signature of these drugs in the investigation process. Results are compared to standard methods in the field of chemical drug profiling and show that conventional approaches miss the inherent structure in the data, which is highlighted by methods such as spectral clustering and its variants. Also, an approach based on genetic programming is presented in order to tune the affinity matrix of the spectral clustering algorithm. Results indicate that all algorithms show a quite different behavior on the two datasets, but in both cases, the data exhibits a level of clustering, since there is at least one type of clustering algorithm that performs significantly better than chance. This confirms the relevancy of using chemical drugs databases in the process of understanding the illicit drugs market, as information regarding drug trafficking networks can likely be extracted from the chemical composition of drugs.
[ "forensic science", "gas chromatography", "spectral clustering", "machine learning", "pattern analysis", "kernel methods" ]
[ "P", "P", "P", "U", "M", "M" ]
1YU18RS
New criterion for nonlinearity of block ciphers
For years, the cryptographic community has searched for good nonlinear functions. Bent functions, almost perfect nonlinear functions, and similar constructions have been suggested as a good base for cryptographic applications due to their highly nonlinear nature. In the first part of this paper, we examine using these functions as block ciphers, and present several distinguishers between almost perfect nonlinear permutations and random permutations. In the second part of the paper, we suggest a criterion to measure the effective linearity of a given block cipher. We devise a general distinguisher for block ciphers based on their effective linearity. Finally, we show that for several constructions, our distinguishing attack is better than previously known techniques.
[ "almost perfect nonlinear permutations", "effective linearity", "differential cryptanalysis", "highly nonlinear functions" ]
[ "P", "P", "U", "R" ]
-Hprj:T
A socio-cognitive framework for designing interactive IR systems: Lessons from the Neanderthals
The article analyzes user-IR system interaction from the broad, socio-cognitive perspective of lessons we can learn about human brain evolution when we compare the Neanderthal brain to the human brain before and after a small human brain mutation is hypothesized to have occurred 35,000-75,000 years ago. The enhanced working memory mutation enabled modern humans (i) to decode unfamiliar environmental stimuli with greater focusing power on adaptive solutions to environmental changes and problems, and (ii) to encode environmental stimuli in more efficient, generative knowledge structures. A sociological theory of these evolving, more efficient encoding knowledge structures is given. These new knowledge structures instilled in humans not only the ability to adapt to and survive novelty and/or changing conditions in the environment, but they also instilled an imperative to do so. Present day IR systems ignore the encoding imperative in their design framework. To correct for this lacuna, we propose the evolutionary-based socio-cognitive framework model for designing interactive IR systems. A case study is given to illustrate the functioning of the model.
[ "neanderthals", "enhanced working memory", "decoding", "encoding", "information retrieval", "cognitive information retrieval", "information retrieval model", "evolutionary psychology" ]
[ "P", "P", "P", "P", "U", "U", "M", "U" ]
35LgmfJ
Correlative joint definition for motion analysis and animation
In this paper we address the problem of creating accurate joint models from real motions while allowing scalability. We propose an automatic method to model, scale, and simulate non-idealized joints from the external motion of markers. We demonstrate the method on human knee joint modeling for musculoskeletal analysis and for character animation. The resulting joints, called correlative joints, are character and motion independent and rely on linear combinations of degrees of freedom calculated from multiple regression laws. We show that by using such models, inverse kinematics (IK) solvers find better solutions when tracking motions and solving constraints. Importing correlative joints into new models involves only minimal requirements on landmark locations and no costly additional computations.
[ "animation", "virtual human", "motion analysis and modeling", "biomechanics" ]
[ "P", "M", "R", "U" ]
1RWiJCp
Relations among neural activities recorded in premotor and motor cortex of trained monkeys during visually guided hand and arm movement tasks
Earlier we have found synchronized oscillations in motor cortical areas of awake behaving monkeys (Sanes and Donoghue, Proc. Natl. Acad. Sci. USA 90 (1993) 4470-4474 and Donoghue et al., J. Neurophysiol. 79 (1998) 159-173). In the reaching task, local field potentials (LFPs) were not directionally tuned, while neuronal responses were clearly modulated with the direction of arm movement. We are now screening the data with 7.5 s records each, using power spectra, coherence, phase and cross-correlation functions in sliding windows and averaged for 20 trials aligned for target, go cue or movement onset. Our findings indicate that unique neural activity patterns characterize different time segments of a sensorimotor task, which is also reflected in LFP recordings.
[ "premotor and motor cortex", "arm movement", "sensorimotor transformations" ]
[ "P", "P", "M" ]
392JB-z
Improved fault recovery for core based trees
The demand for multicast communication in wide-area networks, such as the internet, is increasing. Core based trees is one protocol that has been proposed to support scalable multicasting for sparse groups. When faults occur in the network nodes or links of the tree, the tree can become disconnected. In this paper, we propose an efficient protocol for recovering from faults in a core based tree. One of the key ideas is a technique for restructuring the disconnected subtree so that a loop-free path to the core can be found. The correctness of this protocol is also proved.
[ "fault recovery", "core based tree", "multicast protocol", "sparse-mode multicasting" ]
[ "P", "P", "R", "M" ]
Ezydmhs
A linear approach for determining camera intrinsic parameters using tangent circles
A linear approach using three tangent circles is proposed for determining the intrinsic parameters of cameras. A projected circle can be used to compute the image coordinate of the centre of each tangent circle and to find the points in the image corresponding to the tangent points associated with the images of the centres of the projected circles. The vanishing point can be determined along the circle diameter according to the invariance of the cross-ratio. Solving the equations for the tangent lines produced a curve in the image of the tangent point and its corresponding point. The other vanishing point can be obtained from the intersection point of the two tangent lines. With vanishing points in two orthogonal directions, the intrinsic parameters can be linearly determined. The results of our experiments show that this approach is effective and highly precise.
[ "camera intrinsic parameters", "tangent circle", "vanishing point", "circular points" ]
[ "P", "P", "P", "M" ]
2:o2eGL
Performance prediction for parallel iterative solvers
In this paper, a comprehensive parallel library of sparse iterative methods and preconditioners in HPF and MPI is developed, and a model for predicting the performance of these codes is presented. This model can be used both by users and by library developers to optimize the efficiency of the codes, as well as to simplify their use. The information offered by this model combines theoretical features of the methods and preconditioners with certain practical considerations and predictions about aspects of the performance of their execution on distributed memory multiprocessors.
[ "performance prediction", "parallel iterative solvers", "hpf", "mpi", "sparse algebra" ]
[ "P", "P", "P", "P", "M" ]
4YT3aiC
Dynamic balancing of planar mechanisms using toric geometry
A mechanism is statically balanced if for any motion, it does not apply forces on the base. Moreover, if it does not apply torques on the base, the mechanism is said to be dynamically balanced. In this paper, a new method for determining the complete set of dynamically balanced planar four-bar mechanisms is presented. Using complex variables to model the kinematics of the mechanism, the static and dynamic balancing constraints are written as algebraic equations over complex variables and joint angular velocities. After elimination of the joint angular velocity variables, the problem is formulated as a problem of factorization of Laurent polynomials. Using tools from toric geometry including toric polynomial division, necessary and sufficient conditions for static and dynamic balancing of planar four-bar mechanisms are derived.
[ "dynamic balancing", "toric geometry", "static balancing", "planar four-bar mechanism", "newton polygon", "minkowski sum" ]
[ "P", "P", "P", "P", "U", "U" ]
7CVAZC&
open your eyes...speak your mind...help desk communicating
We have centralized and decentralized our Information Systems department at Baylor, so staying in touch with the campus community as well as our own Information Technology (IT) group can be a difficult task these days. In this poster presentation we want to share with you the ways in which we communicate with each other and our campus community by means of our websites, phones, and knowledgebase system. The Help Desk staff provides front-end support for all technology related calls. We utilize the HEAT Service & Support Software, an Automated Service Desk Solution by FrontRange Solutions USA Inc., to track and allocate all calls received at the Help Desk. During the past year we have reconstructed our Help Desk website using the Content Management System to make it more user-friendly for our clients. We have the option to display alerts for new virus threats or other vital information and a system maintenance section that displays any planned outages or maintenance that may affect our clients. For outages and any known problems with our servers, we post them to our DOWN line that can be accessed by dialing 254.710.3696 (DOWN). We are also implementing a knowledgebase to work with our HEAT call tracking system so that clients can search a problem/answer on their own and, if needed, submit a ticket for an IT tech to fix the problem. This knowledgebase is also being customized for detailed support information that will be used by our Help Desk consultants and IT support staff.
[ "help desk", "communication", "it", "knowledgebase", "heat", "frontrange solutions", "content management system", "down", "baylor university" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
3wjJHuA
floorplan-based fpga interconnect power estimation in dsp circuits
A novel high-level approach for estimating power consumption of global interconnects in data-path oriented designs implemented in FPGAs is presented. The methodology is applied to interconnections between modules and depends only on their mutual distance and shape. The power model has been characterized and verified with on-board power measurements, instead of using low-level estimation tools which often lack the required accuracy (observed errors go up to 350%). The results show that most of the errors of the presented power model lie within 20% of the physical measurements. This is an excellent result considering that in [2] it is shown that there is already a 20% variation in net capacitance due to the different routing solutions given by the router for the same placement.
[ "fpga", "interconnects", "power estimation", "low power" ]
[ "P", "P", "P", "M" ]
zDrddaT
reliable and efficient message delivery in delay tolerant networks using rateless codes
In this paper, we consider the problem of multiple unicast message delivery in Delay Tolerant Networks (DTNs). Long delays, mobility of nodes, and the lack of connectivity that are characteristic of such networks make this problem very challenging. Additionally, the expiry of packets in a network, considered a useful means of regulating resource consumption, reduces reliability and increases the latency of message delivery. Traditional approaches to message delivery in such networks were based on transmitting multiple copies of entire message blocks. Recently, the application of simple erasure-based codes to messages was considered. This option opened up an interesting area of research. In this paper, we effect reliable message delivery with improved latency, even in the presence of packet expiry and intermittent connectivity, by applying rateless codes at the source where a message is generated. We perform extensive simulations on a variety of trace data from mobility models such as the UMassDieselNet testbed, an area-based random waypoint model, and a simple campus bus model. Results reveal the superiority of our scheme in comparison to other present schemes.
[ "delay tolerant networks", "rateless codes", "poisson process" ]
[ "P", "P", "U" ]
2A3Q9Ns
A neural network approach for solving nonlinear bilevel programming problem
A neural network model is presented for solving the nonlinear bilevel programming problem, which is an NP-hard problem. The proposed neural network is proved to be Lyapunov stable and capable of generating an approximate optimal solution to the nonlinear bilevel programming problem. The asymptotic properties of the neural network are analyzed and the conditions for asymptotic stability, solution feasibility and solution optimality are derived. The transient behavior of the neural network is simulated and the validity of the network is verified with numerical examples.
[ "neural network", "nonlinear bilevel programming", "optimal solution", "asymptotic stability" ]
[ "P", "P", "P", "P" ]
2&TWNEM
Subharmonic solutions and homoclinic orbits of second order discrete Hamiltonian systems with potential changing sign
In this paper, a class of second order discrete Hamiltonian systems with periodicity assumptions is considered. Based on the critical point theory, some sufficient conditions for the existence of subharmonic solutions and homoclinic orbits are obtained. The results obtained extend the results in 2007 by relaxing the assumptions on the sign of the potential.
[ "subharmonic solutions", "homoclinic orbits", "second order discrete hamiltonian systems", "critical point theory", "periodic solutions" ]
[ "P", "P", "P", "P", "R" ]
4PwM1Nc
Performance evaluation of the correntropy coefficient in automatic modulation classification
We propose the direct use of the correntropy for modulation format recognition. We investigate the influence of kernel size on the performance of the classifier. The absence of a pre-processing module in the classifier reduces the complexity. The method was shown to be highly efficient and scalable for wireless systems.
[ "correntropy coefficient", "automatic modulation classification", "information theory" ]
[ "P", "P", "U" ]
3YU8JT-
Parallel and distributed implementation of large industrial applications
Parallelization of high performance computing applications has been a field of active research for quite some time now. Most projects that have parallelized industrial software packages have focused on the specific application and did not attempt to document and generalize their lessons learned. We report on results of a project that has parallelized a state of the art industrial computational fluid dynamics (CFD) package and that explicitly aimed at establishing software engineering guidelines for future similar projects. Thanks to the consistent application of the software engineering guidelines defined for the project, the parallel CFD code has proven excellent efficiency and scalability on a large number of parallel hardware platforms. The project also addressed software engineering issues such as object orientation and resource management. The CFD package has been redesigned as an object oriented program and implemented in C++ and Java. The object oriented CFD program has shown reasonable efficiency in preliminary benchmark tests. We expect that further optimizations to the OO code and advances in compiler technology will make the performance gap (relative to the Fortran version) almost disappear in the near future. A resource manager has been developed that allows production runs of parallel scientific computing software to execute in batch mode on networks of workstations by dynamically allocating resources to parallel batch jobs that currently are not claimed by interactive users. The resource manager has been successfully tested with the parallel CFD code as workload.
[ "cfd", "software engineering", "resource management", "object oriented programming", "scientific computing", "parallel processing" ]
[ "P", "P", "P", "P", "P", "M" ]
-9mxh:U
GAIN-BANDWIDTH TRADE-OFF IN THE CMOS CASCODE AMPLIFIER
The cascode amplifier has the potential of providing high gain and high bandwidth simultaneously. However, the design is not as intuitive as one might at first think. In this paper, we present a detailed analysis of the single cascode amplifier. The relationship between gain and bandwidth is central to the design. When used to achieve maximum bandwidth, the voltage gain of the common-source stage is close to unity. However, when the cascode is designed to obtain a high voltage gain, then the gain-bandwidth trade-off, typical of the common source amplifier, reappears. This analysis is used to provide the basis for practical cascode amplifier design.
[ "cascode", "amplifier", "gain-bandwidth product" ]
[ "P", "P", "M" ]
1:ETEpy
Combining matheuristics and MILP to solve the accessibility windows assembly line balancing problem level 2 (AWALBP-L2)
We propose an approach combining a matheuristic and a MILP model to solve the variant Level 2 of the Accessibility Windows Assembly Line Balancing Problem (AWALBP-L2). This is a novel problem that arises in those real-world assembly lines where, in contrast to the most common ones, the length of the workpieces is larger than the widths of the workstations. This means that, at any time, a workstation cannot access an entire workpiece, but only a restricted portion of a workpiece or two consecutive workpieces. As a result, a workstation can only perform, at any time, the subset of tasks that fall inside its accessible area. The problem is to solve the task assignment and the movement scheme subproblems, while minimizing the cycle time. The proposed solving approach consists of (i) a matheuristic to generate good feasible solutions and compute bounds and (ii) a MILP model that makes use of the obtained bounds. A computational study is carried out to compare the performance of the proposed approach with the existing literature.
[ "accessibility windows", "assembly line balancing" ]
[ "P", "P" ]
4AD4wVo
Multisize Sliding Window in Workload Estimation for Dynamic Power Management
Energy efficiency has become one of the key challenges for a large class of electronic systems. Longer time between battery recharges is highly desirable for battery-powered devices such as mobile phones, digital cameras, Internet tablets, and electronic organizers. Energy efficiency is also important for electronic systems powered from the electric grid since it may reduce power consumption and the cooling requirements. Power savings are possible because electronic systems generally have an idle state, when, for example, the processor can run in a low-power state. Thus, the correct estimation of the workload model plays an essential role in deciding which power state transition should be performed by the electronic system, and when. This paper introduces a multisize sliding window workload estimation technique for dynamic power management (DPM) in nonstationary environments. This technique reduces both the effects of identification delay and sampling error present in the previous fixed-size sliding window approach. The system is modeled by discrete-time Markov chains and the model offers a rigorous mathematical formulation of the problem and allows one to obtain an excellent trade-off between performance and power consumption.
[ "estimation", "power management", "workload model", "energy-aware systems", "markov models" ]
[ "P", "P", "P", "M", "R" ]
332gM3V
A discontinuous hp finite element method for the Euler and Navier-Stokes equations
This paper introduces a new method for the solution of the Euler and Navier-Stokes equations, which is based on the application of a recently developed discontinuous Galerkin technique to obtain a compact, higher-order accurate and stable solver. The method involves a weak imposition of continuity conditions on the state variables and on inviscid and diffusive fluxes across inter-element and domain boundaries. Within each element the field variables are approximated using polynomial expansions with local support; therefore, this method is particularly amenable to adaptive refinements and polynomial enrichment. Moreover, the order of spectral approximation on each element can be adaptively controlled according to the regularity of the solution. The particular formulation on which the method is based makes possible a consistent implementation of boundary conditions, and the approximate solutions are locally (elementwise) conservative. The results of numerical experiments for representative benchmarks suggest that the method is robust, capable of delivering high rates of convergence, and well suited to be implemented in parallel computers.
[ "euler", "navier-stokes", "discontinuous galerkin" ]
[ "P", "P", "P" ]
c3oopp1
The H-2 dissociation on the BN, AlN, BP and AlP nanotubes: a comparative study
The thermodynamic and kinetic feasibility of H-2 dissociation on the BN, AlN, BP and AlP zigzag nanotubes has been investigated theoretically by calculating the dissociation and activation energies. We determined the BN and AlP tubes to be inert toward H-2 dissociation, both thermodynamically and kinetically. The reactions are endothermic by 5.8 and 3 kcal mol(-1), exhibiting high activation energies of 38.8 and 30.6 kcal mol(-1), respectively. Our results indicated that H-2 dissociation is thermodynamically favorable on both BP and AlN nanotubes. However, in spite of the thermodynamic feasibility of H-2 dissociation on BP types, this process is kinetically unfavorable due to the partly high activation energy. Generally, we concluded that among the four studied tubes, the AlN nanotube may be an appropriate model for the H-2 dissociation process, from a thermodynamic and kinetic standpoint. We also indicated that H-2 dissociation is not homolytic; rather, it takes place via a heterolytic bond cleavage. In addition, a comparative study has been performed on the electrical and geometrical properties of the tubes. Our analysis showed that the electrical conductivity of the tubes is as follows: BP > AlP > BN > AlN, depending on how the electron-rich and electron-poor atoms are combined.
[ "aluminum nitride nanotubes", "aluminum phosphide nanotubes", "boron nitride nanotubes", "boron phosphide nanotubes", "density functional theory", "h-2 adsorption" ]
[ "M", "M", "M", "M", "U", "M" ]
574DrG8
Motion-based unusual event detection in human crowds
Analyzing human crowds is an important issue in video surveillance and a challenging task due to their nature of non-rigid shapes. In this paper, optical flows are first estimated and then used as a clue to cluster human crowds into groups in an unsupervised manner using our proposed method of adjacency-matrix based clustering (AMC). Once the clusters of human crowds are obtained, their behaviors, with attributes of orientation, position and crowd size, are characterized by a model of force field. Finally, we can predict the behaviors of human crowds based on the model and then detect whether any anomalies of human crowd(s) are present in the scene. Experimental results obtained by using an extensive dataset show that our system is effective in detecting anomalous events for uncontrolled environments of surveillance videos.
[ "unusual event detection", "video surveillance", "optical flows", "human crowd analysis", "unsupervised clustering", "force field model", "adjacency matrix", "spatial-temporal analysis" ]
[ "P", "P", "P", "M", "R", "R", "U", "U" ]
2Mfz3cC
On applying stochastic network calculus
Performance evaluation plays a crucial role in the design of network systems. Many theoretical tools, including queueing theory, effective bandwidth and network calculus, have been proposed to provide modeling mechanisms and results. While these theories have been widely adopted for performance evaluation, each has its own limitations. As network systems have become more complex and harder to describe, with much uncertainty and randomness, some compromise is often necessary and helpful to make their performance evaluation tractable. Stochastic network calculus (SNC) is such a theoretical tool. While SNC is a relatively new theory, it is gaining increasing interest and popularity. In the current SNC literature, much attention has been paid to the development of the theory itself. In addition, researchers have also started applying SNC to the performance analysis of various types of systems in recent years. The aim of this paper is to provide a tutorial on this new theoretical tool. Specifically, various SNC traffic models and SNC server models are reviewed. The focus is on how to apply SNC, for which four critical steps are formalized and discussed. In addition, a list of SNC application topics/areas, where there may exist huge research potential, is presented.
[ "stochastic network calculus", "bound performance", "arrival curve", "service curve" ]
[ "P", "M", "U", "U" ]
qv-GU9X
Practical implementations of a non-disclosure fair contract signing protocol
Contract signing is a practical application of the fair exchange of digital signatures. This application used to be realized by directly adopting the results of the fair exchange of signatures, which do not completely meet the requirements of the signing of a secret contract. The assistance of a trusted third party (TTP) and some cryptographic technology are required to allow two parties to exchange their signatures through the network in a fair manner because these two parties potentially may be dishonest or mistrust each other. This paper presents a subtle method of preventing the off-line TTP from gaining the exchanged signature and the corresponding message when a dispute occurs between the two parties wherein the TTP is required to take part in the exchange procedure. An advanced concept, the non-disclosure property, is proposed in order to prevent a party from misusing evidence left during the exchange process. Two approaches, namely the secret divide method and the convertible signature are demonstrated. To satisfy the properties of the traditional paper-based contract signing, the technique of multi-signature scheme is used in the proposed protocols.
[ "contract signing", "fair exchange", "third party", "cryptography", "semi-trusted", "electronic commerce" ]
[ "P", "P", "P", "U", "U", "U" ]
Yqq1uxm
Optimal portfolio selection with liability management and Markov switching under constrained variance
In this paper, we mainly discuss an optimal portfolio selection model with liability management and Markov switching which maximizes the expected final surplus under constrained variance. Because linear quadratic control is a basic method for the M-V problem, we begin with the general stochastic linear quadratic model and obtain the optimal solution of the problem. Specifically, the analytical optimal portfolio strategy is derived in this paper. Furthermore, we demonstrate that a special case is consistent with the results of Chiu and Li (2006)[3].
[ "portfolio selection", "liability management", "markov switching", "constrained variance", "mean-variance model" ]
[ "P", "P", "P", "P", "M" ]
4u9AScP
Feature selection using Principal Component Analysis for massive retweet detection
Social networks have become a major actor in massive information propagation. In the context of the Twitter platform, its popularity is due in part to the capability of relaying messages (i.e. tweets) posted by users. This particular mechanism, called retweet, allows users to massively share tweets they consider as potentially interesting for others. In this paper, we propose to study the behavior of tweets that have been massively retweeted in a short period of time. We first analyze specific tweet features through a Principal Component Analysis (PCA) to better understand the behavior of highly forwarded tweets as opposed to those retweeted only a few times. Finally, we propose to automatically detect the massively retweeted messages. The qualitative study is used to select the features allowing the best classification performance. We show that the selection of only the most correlated features leads to the best classification accuracy (F-measure of 65.7%), with a gain of about 2.4 points in comparison to the use of the complete set of features.
[ "feature selection", "principal component analysis", "massive retweet", "classification" ]
[ "P", "P", "P", "P" ]
-bo7UVM
Bidding behaviors in duopoly electricity markets with aspirant market share goals
The deregulation and restructuring of electricity markets have created a variety of challenging research problems. In addition, due to the complexity of electricity markets, most of these research problems are not amenable to analytical methods. Agent-based simulation is an approach for simulating and analyzing complex systems with interacting autonomous agents. In this paper, we use an agent-based approach to study the following emergent problem related to electricity market share and competition: what happens if a market participant tries to reach the following two (sometimes conflicting) goals simultaneously, (1) reaching an aspirant market share goal and (2) maximizing profit? More interestingly, what happens if two such participants are competing with each other? The developed agent-based model allows us to examine how the market share goal and profit maximization goal together influence the bidding behaviors of generation companies (i.e. agents) in a day-ahead electricity auction market. It also reveals that conservative market share goals often lead to a collusive behavior and profit maximization. However, if every participant has an aggressive market-share goal, a price war would result. On the other hand, if agents bear unequal market-share goals (e.g. one aggressive and one conservative), one agent will become more profitable than the other. As a result, if both agents want to maximize profit, they will both bid aggressively, resulting in a price war. Therefore, the agent-based model produces results that may explain some real-world pricing outcomes. In addition, to benchmark our agent-based model and to demonstrate the effect of the market-share goal, we develop an analytical model without the market-share goal and compare its results with those from the agent-based model.
[ "bidding behavior", "electricity market", "agent-based simulation", "collusive behavior", "price war", "cyclical pricing dynamics" ]
[ "P", "P", "P", "P", "P", "M" ]
2jGj&gi
Analyze the eigen-structure of DS-SS signals under narrow band interferences
In this paper, an eigen-structure analysis approach for DS-SS (direct sequence spread spectrum) signals under NBI (narrow band interference) is proposed, which can blindly estimate the PN (pseudo-noise) sequence from DS-SS signals at low SNR (signal-to-noise ratio) and SIR (signal-to-interference ratio). Some parameters of the DS-SS signals (such as the period and chip interval of the PN sequence) need to be known. First, the received signal is divided into contiguous non-overlapping temporal vectors according to the period of the PN sequence, and the correlation matrices of these vectors are calculated and accumulated one by one. Eigenvalue decomposition is then applied to the accumulated matrix, and the eigen-structure of the received signals (which includes the NBI eigen-waveforms and the PN sequence) can be estimated blindly from the principal eigenvectors. Based on the estimated NBI eigen-waveforms and PN sequence, the NBI can be rejected, and the DS-SS signals can be de-spread even without prior knowledge of the PN sequence. Theoretical analysis and experimental results show that the approach is very effective and works well in low-SNR and low-SIR environments.
[ "eigen-analysis", "direct sequence spread spectrum (ds-ss) signal", "narrow band interference (nbi)", "pseudo-noise (pn) sequence", "spread-spectrum de-spreading without the pn sequence" ]
[ "U", "R", "R", "R", "M" ]
4xJgyse
data management in the cartel mobile sensor computing system
We propose a reusable data management system, called CarTel, for querying and collecting data from intermittently connected devices. CarTel provides a simple, incrementally-deployable platform for developing automobile-based sensor applications. Our platform provides a dynamic query system that allows both continuous (standing) and one-shot geo-spatial queries over car position, speed, and sensory data, as well as a low-cost, high-bandwidth substrate for communicating with a large network of mobile devices.
[ "mobility", "intermittent connectivity", "wireless", "sensor networks", "query processing" ]
[ "P", "P", "U", "R", "M" ]
14rvsJ9
Musical-based interaction system for the Waseda Flutist Robot
Since 1990, development of the Anthropomorphic Flutist Robot at Waseda University has focused on mechanically reproducing the physiology of the organs involved in flute playing (i.e. lungs, lips, etc.) and on implementing basic cognitive capabilities to interact with flutist beginners. As a result of the research efforts to date, the Waseda Flutist Robot is considered to play the flute nearly at the level of an intermediate human player. However, extending the interaction capabilities of the flutist robot with musical partners requires further research. In this paper, we propose as a long-term goal to enable the flutist robot to interact more naturally with musical partners in the context of a jazz band. For this purpose, a Musical-Based Interaction System (MbIS) is proposed to enable the robot to process both visual and aural cues arising throughout the interaction with musicians. In particular, this paper details the implementation of the visual tracking module on the Waseda Flutist Robot No. 4 Refined IV (WF-4RIV). The visual tracking module is composed of two levels of interaction: basic (a visual interface for the musician based on controlling virtual buttons and faders) and advanced (an instrument tracking system with which the robot processes motion gestures performed by the musical partner in real time, which are then directly mapped into musical parameters of the robot's performance). The experiments carried out focused on verifying the effectiveness and usability of the proposed levels of interaction. In particular, we focused on determining how well the WF-4RIV dynamically changes musical parameters while interacting with a human player. From the experimental results we observed that the physical constraints of the robot play an important role during the interaction. Although further improvements are needed to overcome such constraints, we expect that the interaction experience will become more natural.
[ "music", "human-robot interaction", "particle filter" ]
[ "P", "M", "U" ]
Bb8UQvJ
Effects of temperature on the optical and electrical properties of ZnO nanoparticles synthesized by sol-gel method
ZnO nanoparticles have been synthesized by the sol-gel method for different thermal treatment temperatures. The structural, compositional and morphological properties of the powder are reported. The size of the as-prepared nanoparticles is about 35-54 nm. The nanoparticles thermally treated at 500 °C in air show a strong luminescence band around 384 nm in the UV range. The imaginary and real parts of the sample impedance versus frequency were studied in the range of 40 Hz to 2 MHz.
[ "nanoparticles", "solgel", "zinc oxide", "optical materials", "electrical conductivity" ]
[ "P", "P", "U", "M", "M" ]
Px5zTco
Measuring the Resilience of the Trans-Oceanic Telecommunication Cable System
Resilience is the ability of a system both to absorb shock and to recover rapidly from a disruption so that it can return to its original service delivery levels, or close to them. The trans-oceanic telecommunication fiber-optics cable network that serves as the backbone of the internet is a particularly critical infrastructure system that is vulnerable to both natural and man-made disasters. In this paper, we propose a model to measure the base resiliency of this network, and explore the node-to-node and the overall resiliency of the network using existing data for demand, capacity and flow information. The submarine cable system is represented by a network model to which hypothetical disruptions can be introduced. The base resiliency of the system is measured as the ratio of the value delivery of the system after a disruption to the value delivery of the system before the disruption. We further demonstrate how the resiliency of the trans-oceanic telecommunication cable infrastructure is enhanced through vulnerability reduction.
[ "resiliency", "internet", "infrastructure", "vulnerability" ]
[ "P", "P", "P", "P" ]
Q&GrWG7
A linear time special case for MC games
MC games are infinite-duration two-player games played on graphs. Deciding the winner in MC games is equivalent to modal mu-calculus model checking. In this article we provide a linear time algorithm for a class of MC games. We show that, if all cycles in each strongly connected component of the game graph have at least one common vertex, the winner can be found in linear time. Our results also hold for parity games, which are equivalent to MC games.
[ "mc games", "model checking", "parity games" ]
[ "P", "P", "P" ]
1KdXRyr
Diffeomorphic Active Contours
In this study we present a geometric flow approach to the segmentation of three-dimensional medical images obtained from magnetic resonance imaging (MRI) or computed tomography (CT) scan methods, by minimizing a cost function. This energy term is based on the intensity of the original image, and its minimum is found following a gradient descent curve in an infinite-dimensional space of diffeomorphisms (Diff) to preserve topology. The general framework is reminiscent of variational shape optimization methods but remains closer to general developments on the deformable template theory of geometric flows. In our case, the metric that provides the gradient is defined as a right-invariant inner product on the tangent space (V) at the identity of the group of diffeomorphisms, following the general Lie group approach suggested by Arnold [J. Mec., 5 (1966), pp. 29-43]. To avoid local solutions of the optimization problem and to mitigate the influence of several sources of noise, a finite set of control points is defined on the boundary of the template binary images, yielding a projected gradient descent on Diff.
[ "deformable templates", "groups of diffeomorphisms", "image segmentation", "shape analysis" ]
[ "P", "P", "R", "M" ]
2uyc3jL
Task-based scanpath assessment of multi-sensor video fusion in complex scenarios
The combining of visible light and infrared visual representations occurs naturally in some creatures, including the rattlesnake. This process, and the widespread use of multi-spectral multi-sensor systems, has influenced research into image fusion methods. Recent advances in image fusion techniques have necessitated the creation of novel ways of assessing fused images, which have previously focused on the use of subjective quality ratings combined with computational metric assessment. Previous work has shown the need to apply a task to the assessment process; the current work continues this approach by extending the novel use of scanpath analysis. In our experiments, participants were shown two video sequences, one in high luminance (HL) and one in low luminance (LL), both featuring a group of people walking around a clearing of trees. Each participant was shown the visible and infrared (IR) inputs alone, as well as side-by-side (SBS), average (AVE) fused, discrete wavelet transform (DWT) fused, and dual-tree complex wavelet transform (DT-CWT) fused displays. Participants were asked to track one individual in each video sequence, as well as responding by key press when other individuals carried out secondary actions. Results showed the SBS display to lead to much poorer accuracy than the other displays, while reaction times in carrying out the secondary task favoured AVE in the HL sequence and DWT in the LL sequence. Results are discussed in relation to previous findings regarding item saliency and task demands, and the potential for comparative experiments evaluating human performance when viewing fused sequences against naturally occurring fusion processes such as the rattlesnake's is highlighted. Crown Copyright (C) 2009 Published by Elsevier B.V. All rights reserved.
[ "image fusion", "scanpath analysis", "video assessment", "eye-tracking", "psychophysics" ]
[ "P", "P", "R", "U", "U" ]
4eb2mUW
A study on the consistency and significance of local features in off-line signature verification
The computerized verification of scanned, handwritten signatures has been extensively studied in the past decades, but there are still several possibilities for improvement in the field. To achieve better verification results, we propose a simplified probabilistic model for off-line signature verification. In our model, each of the verification steps can be mathematically described and, therefore, individually analyzed and improved. Using this model, we are able to predict the accuracy of a signature verification system based on just a few a priori known parameters, such as the cardinality and the quality of the input samples. Several experiments have been conducted using our statistics-based classifier to confirm the assumptions and the results of our model. Based on the results, we can provide answers to several old questions within the field, such as why it is so hard to achieve error rates below 10% and how the number of original samples and features affects the final error rates.
[ "off-line signature verification", "classification", "normal distribution", "biometrics" ]
[ "P", "U", "U", "U" ]
G1dB&bh
Shifting Paradigms in Dementia
Atrophy and cerebrovascular disease are the two most important magnetic resonance imaging (MRI) characteristics in the evaluation of dementia. On MRI, atrophy is the primary hallmark of neurodegenerative dementias including Alzheimer's disease (AD), while vascular dementia is characterized by the presence of ischemic vascular damage, such as territorial infarcts, lacunes, and white matter hyperintensities. Evidence is accumulating that vascular factors play an important role in the development of cognitive decline at old age and clinical AD. In the present article we present the results of four recent MRI studies suggesting the additional involvement of small vessel disease in neurodegenerative disorders. Atrophy in the medial temporal lobe, as typically observed in AD, and small vessel disease often coincide. In terms of clinical significance, their effects may even be synergistic. The strict distinction between AD and vascular dementia is often artificial, as most patients suffer from both disorders to some extent. For the future, we see an important role for MRI in identifying these different compartments, regardless of clinical classification. Treatment could be directed by (and evaluated through) MRI patterns, rather than a diagnostic label.
[ "dementia", "atrophy", "cerebrovascular disease", "alzheimer's disease", "small vessel disease", "neurodegenerative disorders", "medial temporal lobe", "white matter hyperintenstities" ]
[ "P", "P", "P", "P", "P", "P", "P", "M" ]
s49:y6Y
Academic team formation as evolving hypergraphs
This paper quantitatively explores the social and socio-semantic patterns of constitution of academic collaboration teams. To this end, we broadly underline two critical features of social networks of knowledge-based collaboration: first, they essentially consist of group-level interactions which call for team-centered approaches. Formally, this induces the use of hypergraphs and n-adic interactions, rather than traditional dyadic frameworks of interaction such as graphs, binding only pairs of agents. Second, we advocate the joint consideration of structural and semantic features, as collaborations are allegedly constrained by both of them. Considering these provisions, we propose a framework which principally enables us to empirically test a series of hypotheses related to academic team formation patterns. In particular, we exhibit and characterize the influence of an implicit group structure driving recurrent team formation processes. On the whole, innovative production does not appear to be correlated with more original teams, while a polarization appears between groups composed of experts only or non-experts only, altogether corresponding to collectives with a high rate of repeated interactions.
[ "team formation", "hypergraphs", "scientific collaboration", "social network analysis", "socio-semantic networks", "epistemic dynamics social cohesion" ]
[ "P", "P", "M", "M", "R", "M" ]
-AePoHq
Update on the Confederacy of European Patent Information User Groups (CEPIUG)
The Confederacy of European Patent Information User Groups (CEPIUG) was founded in 2008 to promote the sharing of experiences and expertise in patent searching across Europe. CEPIUG currently comprises patent information user groups from eight different European countries and is open to new members. Besides the exchange of information, CEPIUG also seeks to promote the coordination of European efforts in the fields of education and training of new entrants into the profession of patent searching, as well as seeking to establish a suitable certification scheme for patent information professionals.
[ "confederacy", "europe", "patent", "information", "certification", "association" ]
[ "P", "P", "P", "P", "P", "U" ]
4tphsP6
automated detection of errors and quality issues in audio-visual content
This paper presents a demonstration of a technology for automated detection of errors and quality issues in audio-visual content. The extensible system, called AVInspector, consists of a signal processing core combined with detection modules for universally applicable audio-visual analysis. The detection modules include algorithms to assess blocking artefacts, picture freezes, clipping, dropouts and noise. The technology can be used both as a stand-alone application and as a library for integration within professional products. Typical purposes are the automated observation and monitoring of audio-visual applications. This includes the automated analysis of audio and video material at ingest or within an archive, real-time observation of streaming services and broadcasts, as well as the assessment of transcoding results for multi-platform playout.
[ "audio/video quality", "content-based video analysis" ]
[ "M", "M" ]
3Yr6cru
On plane graphs with link component number equal to the nullity
In this paper, we study connected plane graphs with link component number equal to the nullity and call them near-extremal graphs. We first study near-extremal graphs with minimum degree at least 3 and prove that a connected plane graph G with minimum degree at least 3 is a near-extremal graph if and only if G is isomorphic to K4, the complete graph on 4 vertices. The result is obtained by studying general graphs using the knowledge of the bicycle space and the Tutte polynomial. Then a simple algorithm is given to judge whether a connected plane graph is a near-extremal graph or not. Finally we study the construction of near-extremal graphs and prove that all near-extremal graphs can be constructed from a loop and K4 by two graph operations.
[ "link component number", "nullity", "near-extremal graphs", "bicycle space", "tutte polynomial" ]
[ "P", "P", "P", "P", "P" ]
488WNNG
Completeness of Hutton [0,1]-quasi-uniform spaces
This paper deals with completeness of Hutton [0,1]-quasi-uniform spaces. Recently, the first two authors, [J. Gutierrez Garcia, M.A. de Prada Vicente, Hutton [0,1]-quasi-uniformities induced by fuzzy (quasi-)metric spaces, Fuzzy Sets and Systems 157 (2006), 755-766], have constructed a Hutton [0,1]-quasi-uniformity induced by a fuzzy metric space (in the sense of George and Veeramani). In this paper, we define completeness of Hutton [0,1]-quasi-uniform spaces as convergence of any stratified tight Cauchy [0,1]-filter. Our main result states the equivalence between completeness of any fuzzy metric space (X, M, *) and completeness of the induced Hutton [0,1]-quasi-uniformity U_M. Also it is proved that the Hutton [0,1]-quasi-uniform space (X, U_M) has, in this context, a kind of completion that is unique up to uniform isomorphism. The obtained results come from an appropriate definition of Cauchy L-filter (where L stands for a complete lattice with additional properties). (c) 2007 Elsevier B.V. All rights reserved.
[ "completeness", "completeness", "hutton [0,1]-quasi-uniform space", "fuzzy metric space", "cauchy l-filter", "probabilistic metric space", "uniform space", "t-norm", "completion" ]
[ "P", "P", "P", "P", "P", "M", "R", "U", "P" ]
2VhH-a3
extended static checking for haskell
Program errors are hard to detect and are costly both to programmers, who spend significant effort on debugging, and to systems that are guarded by runtime checks. Extended static checking can reduce these costs by helping to detect bugs at compile-time, where possible. Extended static checking has been applied to object-oriented languages, like Java and C#, but it has not been applied to a lazy functional language, like Haskell. In this paper, we describe an extended static checking tool for Haskell, named ESC/Haskell, that is based on symbolic computation and assisted by a few novel strategies. One novelty is our use of Haskell as the specification language itself for pre/post conditions. Any Haskell function (including recursive and higher order functions) can be used in our specification, which allows sophisticated properties to be expressed. To perform automatic verification, we rely on a novel technique based on symbolic computation that is augmented by counter-example guided unrolling. This technique can automate our verification process and be efficiently implemented.
[ "counterexample guided unrolling", "pre/postcondition", "symbolic simplification" ]
[ "M", "M", "M" ]
19twM4V
A Jacobi-Davidson method for nonlinear and nonsymmetric eigenproblems
For the nonlinear eigenvalue problem T(lambda)x = 0 we consider a Jacobi-Davidson type iterative projection method for computing a few eigenvalues close to a given parameter. We discuss the numerical solution of the projected eigenvalue problems, in particular for nonsymmetric systems. We present methods to prevent the algorithm from converging to the same eigenpair repeatedly. To verify the Jacobi-Davidson method, it is applied to a rational eigenvalue problem governing damped vibrations of a structure and to a damped gyroscopic eigenvalue problem. (c) 2006 Civil-Comp Ltd. and Elsevier Ltd. All rights reserved.
[ "jacobi-davidson method", "nonlinear eigenvalue problem", "eigenvalue", "iterative projection method", "rational eigenproblem", "damped vibrations of structures" ]
[ "P", "P", "P", "P", "R", "R" ]
DoDePSx
interactive contents authoring system based on xmt and bifs
As the Internet has come to be widely used across all areas of industry, the traditional characteristics of communication and broadcasting have been merged into new services such as interactivity-based broadcasting. Under an interactive environment, viewers not only watch broadcasting programs provided by a contents provider, as in traditional broadcasting, but also pass their requirements to a service provider, obtain supplementary information and search broadcasting programs on the basis of their individual component objects. Thus, in order to access a program in terms of its component contents, it is necessary to compose a scene and define reactions on the basis of its individual objects rather than the whole program itself. Therefore, this paper introduces a new authoring system, which can easily and conveniently produce an interactive contents-based broadcasting program by using MPEG-4 technologies. The authoring system presented in this paper describes a program in two formats: a textual format (XMT) for readability and a binary format (BiFS) for transmission. Since the XMT format is XML-like, it is easier for a user to understand and edit a scene composition. BiFS, on the other hand, is suitable for transmission because of its binary property.
[ "interactive contents", "authoring system", "mpeg-4" ]
[ "P", "P", "P" ]
3pnRJSs
Distributed Privacy-Preserving Access Control in Sensor Networks
The owner and users of a sensor network may be different, which necessitates privacy-preserving access control. On the one hand, the network owner needs to enforce strict access control so that the sensed data are only accessible to users willing to pay. On the other hand, users wish to protect their respective data access patterns, whose disclosure may be used against their interests. This paper presents DP(2)AC, a Distributed Privacy-Preserving Access Control scheme for sensor networks, which is the first work of its kind. Users in DP(2)AC purchase tokens from the network owner with which to query data from sensor nodes, which reply only after validating the tokens. The use of blind signatures in token generation ensures that tokens are publicly verifiable yet unlinkable to user identities, so privacy-preserving access control is achieved. A central component in DP(2)AC is to prevent malicious users from reusing tokens, for which we propose a suite of distributed token reuse detection (DTRD) schemes without involving the base station. These schemes share the essential idea that a sensor node checks with some other nodes (called witnesses) whether a token has been used, but they differ in how the witnesses are chosen. We thoroughly compare their performance with regard to TRD capability, communication overhead, storage overhead, and attack resilience. The efficacy and efficiency of DP(2)AC are confirmed by detailed performance evaluations.
[ "access control", "wireless sensor networks", "privacy", "security" ]
[ "P", "M", "U", "U" ]
1QqdcN&
New Architectural Design of CA-Based Codec
Cellular automata (CA) have already established their novelty for bit and byte error correcting codes (ECC). The current work identifies weaknesses and limitations of the existing CA-based byte ECC and proposes an improved CA-based double-byte ECC which overcomes the identified weaknesses. The code is well suited from a VLSI design viewpoint and requires significantly less hardware and power for decoding compared to the existing techniques employed for Reed-Solomon (RS) codes. It is also shown that the CA-based scheme can easily be extended to correct more than two byte errors.
[ "cellular automata", "byte error correcting code (ecc)", "reed-solomon (rs) code" ]
[ "P", "P", "P" ]
-XLLdbB
A Robust and Efficient Message Passing Library for Volunteer Computing Environments
The objective of this research is to convert ordinary idle PCs into virtual clusters for executing parallel applications. The paper presents VolpexMPI, which is designed to enable seamless forward application progress in the presence of frequent node failures as well as dynamically changing networks and node execution speeds. Process replication is employed to provide robustness. The central challenge in the design of VolpexMPI is to efficiently and automatically manage a dynamically varying number of process replicas in different states of execution progress. The key fault tolerance technique employed is fully distributed sender-based logging. The paper presents the design and an implementation of VolpexMPI. Preliminary results validate that the overhead of providing robustness is modest for applications with a favorable ratio of communication to computation and a low degree of communication.
[ "volunteer computing", "process replication", "message passing interface", "process failures", "message logging" ]
[ "P", "P", "M", "R", "R" ]
VCWQgwq
Universal hash functions for an infinite universe and hash trees
In this note we describe the adaptation of the universal hashing idea to an infinite universe, and to prefix hash trees. These are a structure underlying all extendible hash methods, which have up to now only been studied under the uniform hashing assumption. (C) 2008 Elsevier B.V. All rights reserved.
[ "universal hash functions", "hash trees", "extendible hashing", "data structures", "hash tables" ]
[ "P", "P", "P", "M", "M" ]
2fR6mfA
A note on the existence theorem of possibility space
In this note we provide a proof of the existence theorem of possibility space for the uncountable case and verify Cai's 1996 conjecture [1]. The proof paves the way to link possibility spaces and possibilistic variables to fuzzy set theory.
[ "fuzzy set", "possibility theory", "possibility variable" ]
[ "P", "R", "R" ]
4fvSBA9
Extending the range of real time density matrix renormalization group simulations
We discuss a few simple modifications to time-dependent density matrix renormalization group (DMRG) algorithms which allow one to access larger time scales. We specifically aim at beginners and present practical aspects of how to implement these modifications within any standard matrix product state (MPS) based formulation of the method. Most importantly, we show how to combine the Schrödinger and Heisenberg time evolutions of arbitrary pure states $|\psi\rangle$ and operators $A$ in the evaluation of $\langle A\rangle_\psi(t) = \langle\psi|A(t)|\psi\rangle$. This includes quantum quenches. The generalization to (non-)thermal mixed state dynamics $\langle A\rangle_\rho(t) = \mathrm{Tr}[\rho A(t)]$ induced by an initial density matrix $\rho$ is straightforward. In the context of linear response (ground state or finite temperature $T>0$) correlation functions, one can extend the simulation time by a factor of two by exploiting time translation invariance, which is efficiently implementable within MPS DMRG. We present a simple analytic argument for why a recently-introduced disentangler succeeds in reducing the effort of time-dependent simulations at $T>0$. Finally, we advocate the python programming language as an elegant option for beginners to set up a DMRG code.
[ "density matrix renormalization group", "hints for beginners", "python code" ]
[ "P", "M", "R" ]
2&uZQ:u
Effects of chemical reaction, heat and mass diffusion in natural convection flow from an isothermal sphere with temperature dependent viscosity
Purpose - To investigate the effects of chemical reaction on natural convection heat and mass transfer from a sphere with temperature dependent viscosity. Design/methodology/approach - The governing boundary layer equations are transformed into a non-dimensional form, and the resulting nonlinear system of partial differential equations is reduced to local non-similarity boundary layer equations, which are solved numerically by a very efficient implicit finite difference method together with the Keller box scheme. Findings - With the effects of chemical reaction, the skin-friction coefficients, surface heat transfer rates, and velocity and concentration distributions decrease, while the mass transfer rates and temperature distribution increase within the boundary layer. Research limitations/implications - The investigation is valid for steady two-dimensional laminar flow. The concentration of the reactant is maintained at a constant value and the sphere is isothermal. An extension to unsteady flow with temperature dependent thermal conductivity is left for future work. Originality/value - This result provides guidance to engineers about heat and mass transfer with the effects of chemical reaction from an isothermal spherical surface.
[ "chemical reactions", "convection", "viscosity" ]
[ "P", "P", "P" ]
1SceeLN
A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. Finally, we develop a technique for estimating radiation momentum deposition during the DDMC simulation, a quantity that is required to calculate correct fluid motion in coupled radiation-hydrodynamics problems. With a set of numerical examples, we demonstrate that our improved DDMC method is accurate and can provide efficiency gains of several orders of magnitude over standard Monte Carlo. (c) 2006 Elsevier Inc. All rights reserved.
[ "hybrid transport-diffusion", "monte carlo", "radiative transfer" ]
[ "P", "P", "U" ]
4hzfwYx
A forward-trajectory global semi-Lagrangian transport scheme
A forward-trajectory semi-Lagrangian scheme for advection on the surface of the sphere is proposed. The advection scheme utilizes the forward (downstream) trajectory originating at Eulerian grid points and cascade interpolation, a sequence of 1D interpolations, to transfer data from the downstream Lagrangian points to the Eulerian points. A new and more accurate algorithm determines pole values. The resulting forward-trajectory semi-Lagrangian scheme can easily incorporate high-order trajectory integration methods. This avoids the standard iterative process in a typical backward-trajectory scheme. Two third-order accurate schemes and a second-order accurate scheme are presented. A mass-conservative version of the forward-trajectory semi-Lagrangian scheme is also derived within the cascade interpolation framework. Mass from a Lagrangian cell is transferred to the corresponding Eulerian cell with two 1D remappings through an intermediate cell system. Mass in the polar region is redistributed by way of an efficient local approximation. The resulting scheme is globally conservative, but restricted to meridional Courant numbers C <= 1.
[ "cascade interpolation", "semi-lagrangian advection", "spherical geometry", "forward trajectory", "mass conservation", "rungekutta method" ]
[ "P", "R", "U", "R", "R", "M" ]
2:s-T26
Clinical information systems end user satisfaction: The expectations and needs congruencies effects
A model for clinical users' satisfaction with clinical information systems (CIS). Empirical test of the model using a survey in a public hospital (112 doctors, 203 nurses). Perceived CIS performance is found to be the most influential satisfaction factor. The expectations congruency is the next significant satisfaction factor among doctors. The second significant satisfaction factor for nurses is found to be the needs congruency.
[ "clinical information systems", "end user satisfaction", "needs congruency", "expectations congruency" ]
[ "P", "P", "P", "P" ]
2w6FXmS
TED: A texture-edge descriptor for pedestrian detection in video sequences
This paper presents a novel descriptor, TED, for pedestrian detection in video sequences. TED describes texture and edge information simultaneously. TED is a local descriptor because it is defined over a neighborhood. The size of the TED, independent of the neighborhood size defined over it, is 8 bits. TED is based on intensity difference, and so it is robust against illumination changes. We demonstrate TED performance in a block-based framework for pedestrian detection. Experimental results show the effectiveness of the proposed descriptor when applied in different outdoor and indoor environments.
[ "texture", "edge", "pedestrian detection", "local binary pattern", "block-based approach", "background subtraction", "surveillance systems" ]
[ "P", "P", "P", "M", "M", "U", "U" ]
4LEEuy1
policy optimization for dynamic power management
Dynamic power management schemes (also called policies) can be used to control the power consumption levels of electronic systems, by setting their components in different states, each characterized by a performance level and a power consumption. In this paper, we describe power-managed systems using a finite-state, stochastic model. Furthermore, we show that the fundamental problem of finding an optimal policy which maximizes the average performance level of a system, subject to a constraint on the power consumption, can be formulated as a stochastic optimization problem called policy optimization. Policy optimization can be solved exactly in polynomial time (in the number of states of the model). We implemented a policy optimization tool and tested the quality of the optimal policies on a realistic case study.
[ "policy", "optimality", "dynamic power management", "power", "management", "scheme", "control", "power consumption", "systems", "component", "performance", "paper", "power-management", "stochastic modeling", "model", "constraint", "stochastic optimization", "polynomial", "timing", "tool", "quality", "case studies", "reconstruction", "visibility", "emulation", "functional simulation" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U", "U", "U" ]
1Q36s:b
A parameterized algorithm for the hyperplane-cover problem
We consider the problem of covering a given set of points in the Euclidean space R^m by a small number k of hyperplanes of dimensions bounded by d, where d <= m. We present a very simple parameterized algorithm for the problem, and give a thorough mathematical analysis to prove its correctness and derive its complexity. When the algorithm is applied to the standard HYPERPLANE-COVER problem in R^d, it runs in time O*(k^((d-1)k)/1.3^k), improving the previous best algorithm of running time O*(k^(dk+d)) for the problem. When the algorithm is applied to the LINE-COVER problem in R^2, it runs in time O*(k^k/1.35^k), improving the previous best algorithm of running time O*(k^(2k)/4.84^k) for the problem. (C) 2010 Elsevier B.V. All rights reserved.
[ "parameterized algorithm", "hyperplane-cover", "line-cover", "computational geometry" ]
[ "P", "P", "P", "U" ]
3ew3Gdy
Honeymoon with IWBs: A qualitative insight in primary students' views on instruction with interactive whiteboard
The main purpose of this study was to investigate the views of primary students about interactive whiteboard [IWB] use in their classes from attitudinal and pedagogical perspectives. The research was designed as an empirical approach to phenomenology. Data were collected from fifty primary students (fourth to eighth grade) through focus group interviews. Nvivo 9 qualitative data analysis software was used to analyze the data. Results showed that students like instruction with IWB especially for such reasons/capabilities as practical and economical use, better visual presentation, and test-based use. Students were predominantly uncomfortable with the technical problems. They believed that instruction with IWB positively impacted their learning especially because of visualization and contextualization, effective presentation, test-based use, and motivational factors. Finally it was inferred that IWBs were not used to their full potential, and both technical problems and common practices indicated that teachers were still at an initial stage of transition to instruction with IWB and needed both technical and pedagogical training. (C) 2012 Elsevier Ltd. All rights reserved.
[ "elementary education", "improving classroom teaching", "interactive learning environments", "media in education", "teaching/learning strategies" ]
[ "U", "U", "M", "M", "M" ]
-84jfki
Improvement of Dependability against Node Capture Attacks for Wireless Sensor Networks
A Wireless Sensor Network consists of sensor nodes with limited computational power and memory size. Due to the nature of the network, the data is vulnerable to attacks. Thus, maintaining confidentiality is an important issue. To address this problem, many countermeasures utilizing common-key or public-key cryptosystems have been proposed. However, these methods have problems with establishing keys between the source and the destination nodes. When these two nodes try to establish new keys, they must exchange information several times. Also, the routes of Wireless Sensor Networks can change frequently due to unstable wireless connections and batteries running out on sensor nodes. These problems of security and failure become more serious as the number of nodes in the network increases. In this paper, we propose a new data distribution method, based on the Secret Sharing Scheme, to compensate for vulnerability and failure. In addition, we confirm the effect of our method through experiments. Concerning security, we compare our method with the existing TinySec, which is the major security architecture for Wireless Sensor Networks.
[ "node capture attack", "wireless sensor networks", "security", "secret sharing scheme", "key refreshment" ]
[ "P", "P", "P", "P", "M" ]
2RcrV4N
Performance evaluation of market-based resource allocation for Grid computing
Resource allocation is an important aspect of Grid computing. Over the past few years, various systems have been developed which use market mechanisms to allocate resources. However, the performance of such policies has not been sufficiently studied. In this paper, we investigate under which circumstances market-based resource allocation by continuous double auctions and by the proportional share protocol, respectively, outperforms a conventional round-robin approach. We develop a model for clients, servers and the market, and present simulation results. Factors which are studied include the amount of load in the system, the number of resources, different degrees of resource heterogeneity, and communication delays. Copyright (C) 2004 John Wiley Sons, Ltd.
[ "performance evaluation", "market-based resource allocation", "grid computing", "simulation" ]
[ "P", "P", "P", "P" ]
-X:jsnS
Greek letters in random staircase tableaux
In this paper we study a relatively new combinatorial object called staircase tableaux. Staircase tableaux were introduced by Corteel and Williams in connection with the Asymmetric Exclusion Process and have since found interesting connections with Askey-Wilson polynomials. We develop a probabilistic approach that allows us to analyze several parameters of a randomly chosen staircase tableau of a given size. In particular, we obtain limiting distributions for statistics associated with appearances of Greek letters in staircase tableaux. (c) 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2012
[ "staircase tableaux", "asymmetric exclusion process", "asymptotic normality" ]
[ "P", "P", "U" ]
3FH8m8F
Real-time triple product relighting using spherical local-frame parameterization
This paper addresses the problem of real-time rendering for objects with complex materials under varying all-frequency illumination and changing view. Our approach extends the triple product algorithm by using local-frame parameterization, spherical wavelets, per-pixel shading and visibility textures. Storing BRDFs with local-frame parameterization allows us to handle complex BRDFs and incorporate bump mapping more easily. In addition, it greatly reduces the data size compared to storing BRDFs with respect to the global frame. The use of spherical wavelets avoids uneven sampling and energy normalization of cubical parameterization. Finally, we use per-pixel shading and visibility textures to remove the need for fine tessellations of meshes and shift most computation from vertex shaders to more powerful pixel shaders. The resulting system can render scenes with realistic shadow effects, complex BRDFs, bump mapping and spatially-varying BRDFs under varying complex illumination and changing view at real-time frame rates on modern graphics hardware.
[ "real-time rendering", "spherical wavelets", "all-frequency relighting", "precomputed radiance transfer", "local frame" ]
[ "P", "P", "R", "U", "M" ]
-DR:PZS
Counting Markov Types, Balanced Matrices, and Eulerian Graphs
The method of types is one of the most popular techniques in information theory and combinatorics. Two sequences of equal length have the same type if they have identical empirical distributions. In this paper, we focus on Markov types, that is, sequences generated by a Markov source (of order one). We note that sequences having the same Markov type share the same so-called balanced frequency matrix that counts the number of distinct pairs of symbols. We enumerate the number of Markov types for sequences of length n over an alphabet of size m. This turns out to be asymptotically equivalent to estimating the number of balanced frequency matrices, the number of integer solutions of a system of linear Diophantine equations, and the number of connected Eulerian multigraphs. For fixed m, we prove that the number of Markov types is asymptotically equal to d(m) n^(m^2-m)/(m^2 - m)!, where we give an integral representation for d(m). For m -> infinity, we conclude that asymptotically the number of types is equivalent to sqrt(2) m^(3m/2) e^(m^2) / (m^(2m^2) 2^m pi^(m/2)) n^(m^2-m), provided that m = o(n^(1/4)). These findings are derived by analytical techniques ranging from analytic combinatorics and multidimensional generating functions to the saddle point method.
[ "markov types", "eulerian graphs", "balance frequency matrices", "linear diophantine equations", "saddle point method", "multidimensional generating functions" ]
[ "P", "P", "P", "P", "P", "M" ]
36Vh-r&
Enhanced dynamic range analog filter topologies with a notch/all-pass circuit example
The signal handling capability of a filter is called its dynamic range. In this paper, a topological form for the synthesis of filters with high dynamic range is proposed. A biquad notch/all-pass filter is presented in conformity with the given topological form. It is shown that there is a trade-off between dynamic range and the high input impedance property. The presented circuit is compared with other notch filters in the literature. It has fewer components and better high-frequency response and dynamic range than the others. Since the circuit includes a minimum number of resistors, it can easily provide electronically tunable circuits through resistor/controlled current conveyor replacement. Simulations are performed to verify the theoretical results. Routh-Hurwitz stability analyses are also given.
[ "dynamic range", "all-pass filter", "notch filter", "current conveyors", "first generation current conveyor" ]
[ "P", "P", "P", "P", "M" ]
2:KYqgW
Resistive switching effect on Al2O3/InGaAs stacks
We examine the post-breakdown characteristics of metal gate/Al2O3/InGaAs stacks. A resistive switching (RS) effect is observed. The oxide-substrate interface is studied by X-ray photoelectron spectroscopy spectra. The As-O bonds are responsible for large frequency deviations and the RS effect.
[ "resistive switching", "high-k dielectrics", "oxide breakdown" ]
[ "P", "U", "M" ]
2p2:i3D
A Comparison of Tabular PDF Inversion Methods
The most common form of tabular inversion used in computer graphics is to compute the cumulative distribution table of a probability distribution (PDF) and then search within it to transform points, using an O(log n) binary search. Besides the standard inversion method, however, several other discrete inversion algorithms exist that can perform the same transformation in O(1) time per point. In this paper, we examine the performance of three of these alternate methods, two of which are new.
[ "pdf inversion", "importance sampling", "sample generation" ]
[ "P", "U", "U" ]
3pdpzit
An adaptive scheduling algorithm for differentiated services on WDM optical networks
One of the important issues in the design of future generation high-speed networks is to provide differentiated services to different types of applications with various time constraints. In this paper, we study the problem of providing real-time service to either hard or soft real-time messages in conjunction with a normal transmission service to variable-length messages without time constraints in Wavelength-Division-Multiplexing (WDM) optical networks. We propose an adaptive scheduling algorithm to schedule and manage message transmissions in single-hop passive-star coupler based WDM optical networks. We have analyzed the complexity of the algorithm to show its feasibility. In addition, we have conducted extensive discrete-event simulations to evaluate the performance of the proposed algorithm. This study suggests that when scheduling message transmission in WDM networks, a differentiated service should be designed to benefit the transmission of both real-time and non-real-time messages so that the overall performance of the network could be improved. (C) 2004 Elsevier B.V. All rights reserved.
[ "differentiated service", "optical networks", "wavelength-division-multiplexing", "real-time scheduling", "medium access control protocol" ]
[ "P", "P", "P", "R", "U" ]
34ed7io
a zkp-based identification scheme for base nodes in wireless sensor networks
Most of the published work on authentication mechanisms for wireless sensor networks establishes secure authentication for sensor nodes, but not for the base node that is in fact required to authenticate other nodes in the same network. This situation can lead to an attack whereby a malicious party masquerades as the base station and fraudulently authenticates other legitimate nodes to capture and/or inject messages within the network. The trust assumption in the existing literature with regard to base stations (i.e., implicitly trusting the base station) presents a serious security loophole. We address this problem by proposing a protocol that helps build a base station authentication mechanism in the framework of a one-hop mesh network, and later extend it to a multi-hop framework. Our network consists of a commissioning/installation device and several forests of nodes (a base node and other nodes). The installation device is responsible for deploying nodes in a selected area and distributes information to them as necessary. We use a modification of the Guillou-Quisquater identification scheme as our Zero-Knowledge (ZK) protocol, in conjunction with the µTESLA protocol for authenticated broadcast, to authenticate the base station.
[ "base stations", "wireless security", "zero-knowledge protocol", "security protocols", "sensor and ad hoc networks", "entity authentication", "guillou-quisquater protocol" ]
[ "P", "R", "R", "R", "M", "M", "R" ]
3jMQG5c
Information technology offshoring in India: a postcolonial perspective
In recent years India has become the information technology (IT) offshoring destination of choice for many Western organizations. From the perspective of vendor organizations in India, however, the IT offshoring phenomenon is more than just a business relationship with Western firms. It is also embedded within the context of the longstanding imbalances of power in the relationship between the West and the East, the implications of which have been largely ignored in empirical work on offshoring within the information systems (IS) discipline. Drawing on concepts from postcolonial theory and using data from our ethnographic fieldwork, we explore the experiences and responses of one Indian vendor organization to asymmetries of power in its relationship with Western client organizations. Our analysis demonstrates how a postcolonial reading and interpretation of IT offshoring adds an important new dimension to previous IS research and also helps to develop a more comprehensive understanding of the strategies deployed by vendor organizations.
[ "india", "it offshoring", "postcolonial theory", "critical research", "ethnography" ]
[ "P", "P", "P", "M", "U" ]
4TterLV
Generic subset ranking using binary classifiers
A widespread idea for attacking the ranking problem is to reduce it to a set of binary preferences and apply well-studied classification methods. In particular, we consider this reduction for generic subset ranking, which is based on minimization of position-sensitive loss functions. The basic question addressed in this paper is whether an accurate classifier transfers directly into a good ranker. We propose a consistent reduction framework guaranteeing that the minimal regret of zero for subset ranking is achievable by learning binary preferences assigned with importance weights. This fact allows us to further develop a novel upper bound on the subset ranking regret in terms of binary regrets. We show that their ratio can be at most 2 times the maximal deviation of discounts between adjacent positions. We also present a refined version of this bound when only the quality over the top rank positions is of concern. These bounds provide theoretical support for the use of the resulting binary classifiers in solving the subset ranking problem. (C) 2012 Elsevier B.V. All rights reserved.
[ "subset ranking", "position-sensitive measures", "regret bound" ]
[ "P", "M", "R" ]
-Ev3X8i
Effective traversal algorithms and hardware architecture for pyramidal inverse displacement mapping
Traversal algorithms and a hardware architecture for inverse displacement mapping. Up to 66% reduction in the number of iteration steps. Our hardware architecture is beneficial for mobile 3D graphics.
[ "hardware architecture", "displacement mapping", "graphics processors", "image pyramid" ]
[ "P", "P", "M", "M" ]
EPuMLAP
Diagnostic tools for evaluating and updating hidden Markov models
In this paper we consider two related problems in hidden Markov models (HMMs). One, how the various parameters of an HMM actually contribute to predictions of state sequences and spatio-temporal pattern recognition. Two, how the HMM parameters (and associated HMM topology) can be updated to improve performance. These issues are examined in the context of four different experimental settings from pure simulations to observed data. Results clearly demonstrate the benefits of applying some critical tests on the model parameters before using it as a predictor or spatio-temporal pattern recognition technique.
[ "hidden markov models" ]
[ "P" ]
1EZ-bTN
evaluation of cache-based superscalar and cacheless vector architectures for scientific computations
The growing gap between sustained and peak performance for scientific applications is a well-known problem in high-end computing. The recent development of parallel vector systems offers the potential to bridge this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of scientific computing areas. First, we present the performance of a microbenchmark suite that examines low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Results demonstrate that the SX-6 achieves high performance on a large fraction of our applications and often significantly outperforms the cache-based architectures. However, certain applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.
[ "evaluation", "vectorization", "architecture", "scientific computing", "computation", "performance", "applications", "developer", "parallel", "systems", "computer science", "capabilities", "paper", "processor", "benchmark", "behavior", "demonstrate", "algorithm", "implementation", "reengineering", "cache", "high-performance" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U" ]
3jCeHtF
Coupling of nonconforming finite elements and boundary elements II: A posteriori estimates and adaptive mesh-refinement
The coupling of nonconforming finite element and boundary element methods was established in Part I of this paper, where quasi-optimal a priori error estimates are provided. In the second part, we establish sharp a posteriori error estimates and so justify adaptive mesh-refining algorithms for the efficient numerical treatment of transmission problems with the Laplacian in unbounded domains.
[ "nonconforming finite elements", "a posteriori error estimates", "coupling of finite elements and boundary elements", "adaptive algorithms" ]
[ "P", "P", "R", "R" ]
fiQQMaa
Gene expression data analysis with the clustering method based on an improved quantum-behaved Particle Swarm Optimization
Microarray technology has been widely applied to measuring gene expression levels for thousands of genes simultaneously. In this setting, gene cluster analysis is useful for discovering gene function, because co-expressed genes are likely to share the same biological function. Many clustering algorithms have been used in the field of gene clustering. This paper proposes a new scheme for clustering gene expression datasets based on a modified version of the Quantum-behaved Particle Swarm Optimization (QPSO) algorithm, known as the Multi-Elitist QPSO (MEQPSO) model. The proposed clustering method also employs a one-step K-means operator to effectively accelerate the convergence of the algorithm. The MEQPSO algorithm is tested and compared with some other recently proposed PSO and QPSO variants on a suite of benchmark functions. Based on the computer simulations, some empirical guidelines are provided for selecting suitable parameters for MEQPSO clustering. The performance of the MEQPSO clustering algorithm has been extensively compared with several optimization-based algorithms and classical clustering algorithms over several artificial and real gene expression datasets. Our results indicate that the MEQPSO clustering algorithm is a promising technique and can be widely used for gene clustering.
[ "gene expression data", "clustering", "quantum-behaved particle swarm optimization (qpso)", "particle swarm optimization (pso)" ]
[ "P", "P", "P", "R" ]
:i-BNSL
templated recursive image composition
With the proliferation of image acquisition and consumption, there is an increasing need for solutions that help ordinary people create high quality image composites. In most solutions today, image layouts are provided as fixed templates, which offer the potential of visually diverse layout sets. However, the layout choices are limited to those selected in advance by the template designer; and the library may not support a particular image count, aspect ratio set or spatial distribution. To ameliorate these shortcomings, we propose an image layout framework called Templated Recursive Image Composition. TRIC is template-based in that every layout is based on a template specification. However, TRIC is also generative in that virtually any image set can be accommodated as long as there is at least one image for every region in the template specification. Constraints ensure respect for image aspect ratios; for spacing in the layout interior; and for proportions and placement of sublayouts corresponding to regions in the template specification. We present a description of TRIC, results that demonstrate its versatility, and a user study that supports its acceptability.
[ "template", "image layout", "automatic layout", "collage" ]
[ "P", "P", "M", "U" ]