Columns: abstract (string, 8–9.19k chars) · authors (string, 9–1.96k chars) · title (string, 8–367 chars) · __index_level_0__ (int64, 13–1,000k)
We describe several new algorithms for particle flow using non-zero diffusion with optimal Q as well as suboptimal Q. The purpose of using optimal Q is twofold: to improve the estimation accuracy of the state vector and to improve the accuracy of uncertainty quantification. Q is the covariance matrix of the diffusion for particle flow corresponding to Bayes' rule.
['Fred Daum']
Seven dubious methods to compute optimal Q for Bayesian stochastic particle flow
879,277
(MATH) In the bounded-storage model for information-theoretically secure encryption and key-agreement one can prove the security of a cipher based on the sole assumption that the adversary's storage capacity is bounded, say by s bits, even if her computational power is unlimited. Assume that a random t-bit string R is either publicly available (e.g. the signal of a deep space radio source) or broadcast by one of the legitimate parties. If s ≪ t, the adversary can store only partial information about R. The legitimate sender Alice and receiver Bob, sharing a short secret key K initially, can therefore potentially generate a very long n-bit one-time pad X with n ≫ |K| about which the adversary has essentially no information, thus at first glance apparently contradicting Shannon's bound on the key size of a perfect cipher. All previous results in the bounded-storage model were partial or far from optimal, for one of the following reasons: either the secret key K had in fact to be longer than the derived one-time pad, or t had to be extremely large (t ≥ ns), or the adversary was assumed to be able to store only actual bits of R rather than arbitrary s bits of information about R, or the adversary could obtain a non-negligible amount of information about X. In this paper we prove the first non-restricted security result in the bounded-storage model, exploiting the full potential of the model: K is short, X is very long (e.g. gigabytes), t needs to be only moderately larger than s, and the security proof is optimally strong. In fact, we prove that s/t can be arbitrarily close to 1 and hence the storage bound is essentially optimal.
['Stefan Dziembowski', 'Ueli Maurer']
Tight security proofs for the bounded-storage model
279,422
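The bounded-storage setting described in the abstract above lends itself to a toy sketch: a short shared key K selects bit positions of a long public random string R, and the selected bits form a derived pad X. This is only an illustration of the model's cast of characters (R, K, X, t, n, as named in the abstract); `derive_pad` is a hypothetical helper for demonstration, not the construction whose security the paper proves.

```python
# Toy sketch of the bounded-storage model's cast of characters: a long
# public random string R, a short shared key K, and a derived pad X.
# Illustration only -- `derive_pad` is a hypothetical helper, NOT the
# construction whose tight security the paper proves.
import random

def derive_pad(R, K, n):
    """Select n bits of R at pseudorandom positions determined by K."""
    rng = random.Random(K)                  # K seeds the position choice
    positions = rng.sample(range(len(R)), n)
    return [R[i] for i in positions]

# A t-bit public randomizer (fixed seed for reproducibility), a short
# key, and a derived pad much shorter than t.
t, n = 1024, 64
gen = random.Random(0)
R = [gen.randrange(2) for _ in range(t)]
X = derive_pad(R, K=42, n=n)
```

Because the positions depend only on K, both legitimate parties can recompute the same pad from the public string, which is the property the model exploits.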
Effects of topic progression in interactive video retrieval experimentation.
['Dan Albertson']
Effects of topic progression in interactive video retrieval experimentation.
739,052
People counting is a challenging task with many applications. We propose a method with a fixed stereo camera that is based on projecting a template onto the depth image. The method was tested on a challenging outdoor dataset with good results and runs in real time.
['Tim van Oosterhout', 'Ben J. A. Kröse', 'Gwenn Englebienne']
People counting with stereo cameras: two template-based solutions
798,458
Las Palmeras Molecular Dynamics (LPMD) is a highly modular and extensible molecular dynamics (MD) code using interatomic potential functions. LPMD is able to perform equilibrium MD simulations of bulk crystalline solids, amorphous solids and liquids, as well as non-equilibrium MD (NEMD) simulations such as shock wave propagation, projectile impacts, cluster collisions, shearing, deformation under load, heat conduction, heterogeneous melting, among others, which involve unusual MD features like non-moving atoms and walls, unstoppable atoms with constant-velocity, and external forces like electric fields. LPMD is written in C++ as a compromise between efficiency and clarity of design, and its architecture is based on separate components or plug-ins, implemented as modules which are loaded on demand at runtime. The advantage of this architecture is the ability to completely link together the desired components involved in the simulation in different ways at runtime, using a user-friendly control file language which describes the simulation work-flow. As an added bonus, the plug-in API (Application Programming Interface) makes it possible to use the LPMD components to analyze data coming from other simulation packages, convert between input file formats, apply different transformations to saved MD atomic trajectories, and visualize dynamical processes either in real-time or as a post-processing step. Individual components, such as a new potential function, a new integrator, a new file format, new properties to calculate, new real-time visualizers, and even a new algorithm for handling neighbor lists can be easily coded, compiled and tested within LPMD by virtue of its object-oriented API, without the need to modify the rest of the code.
LPMD already includes several pair potential functions such as Lennard-Jones, Morse, Buckingham, MCY and the harmonic potential, as well as embedded-atom model (EAM) functions such as the Sutton-Chen and Gupta potentials. Integrators to choose from include Euler (if only for demonstration purposes), Verlet and Velocity Verlet, Leapfrog and Beeman, among others. Electrostatic forces are treated as another potential function, by default using the plug-in implementing the Ewald summation method.
['Sergio Davis', 'Claudia Loyola', 'González Fe', 'Joaquín Peralta']
Las Palmeras Molecular Dynamics: A flexible and modular molecular dynamics code
543,319
Free-space optical communication between satellites networked together can enable a high data rate between the satellites. Coherence multiplexing (CM) is an attractive technique for satellite networking due to its ability to cope with the asynchronous nature of communication traffic and the dynamic changes taking place in the satellite constellation. The use of optical radiation for intersatellite links creates very narrow beam divergence angles. Due to the narrow beam divergence angle, the vibration of the pointing system, the movement of the satellites, and the large distance between them, pointing from one satellite to another is a complicated task. The vibration of the pointing system is caused by two fundamental stochastic mechanisms: 1) tracking noises created by the electrooptic tracker and 2) vibrations created by internal satellite mechanical mechanisms and external environments. We derive mathematical models of the signal, the noise, the approximate signal-to-noise ratio, and the approximate bit-error rates of optical communication satellite networks as functions of the system's parameters, the number of satellites, and the vibration amplitude for frequency-shift keying coherence multiplexing (FSK-CM). Based on these models, we can calculate the negative impact of both the number of satellites and the optical terminal vibration on the system's performance.
['G. Kats', 'S. Arnon']
Analysis of optical coherence multiplexing networks for satellite communication
369,545
This chapter presents the non-dynamic part of a formal framework for teamwork in multi-agent systems. The framework consists of both a static part, defining collective motivational attitudes in such a way that the system developer can adapt them to the circumstances, and a dynamic part monitoring the changes in team attitudes during the course of cooperative problem solving (CPS).

In the first part of this chapter, the notion of collective intention in teams of agents is investigated. Starting from individual intentions, goals, and beliefs defining agents' local attitudes, we arrive at an understanding of collective intention in cooperative teams as a rather strong concept: it implies that all members intend for all others to share that intention. This way a team is glued together by collective intention, and exists as long as this attitude holds, after which it may disintegrate.

Collective intentions are formalized in a multi-modal logical framework. Together with individual and common knowledge and/or belief, collective intention constitutes a basis for preparing a plan, reflected in the strongest attitude, i.e., in collective commitment, defined and investigated in the next part. Distinct versions of collective commitments that are applicable in various situations differ with respect to the aspects of teamwork of which the agents involved are aware, and the kind of awareness present within a team. This way a kind of tuning mechanism is provided for the system developer to tune a version of collective commitment fitting the circumstances. Finally, a few exemplar versions of collective commitment resulting from instantiating the general tuning scheme are presented.
['Barbara Dunin-Keplicz', 'Rineke Verbrugge']
A logical view on teamwork
8,890
Patients' waiting times in healthcare services are often long and unacceptable, and place great stress on clinic staff. This paper describes the development and use of a detailed simulation model of an Ear, Nose, and Throat (ENT) outpatient clinic at the University of Illinois Medical Center. Arena simulation software combined with Arena Visual Basic for Applications (VBA) is used to model the ENT clinic and its appointment system. VBA allows various appointment schedules to be examined and the interaction between the appointment system and patient waiting times to be evaluated. It has been shown that an alternative appointment schedule significantly reduces patient waiting times without a need for extra resources.
['Maryam Haji', 'Houshang Darabi']
A simulation case study: Reducing outpatient waiting time of otolaryngology care services using VBA
174,540
The near and shortwave infrared spectral reflectance properties of several mineral substrates impregnated with crude oils (°API 19.2, 27.5 and 43.2), diesel, gasoline and ethanol were measured and assembled in a spectral library. These data were examined using Principal Component Analysis (PCA) and Partial Least Squares (PLS) Regression. Unique and characteristic absorption features were identified in the mixtures, besides variations of the spectral signatures related to the compositional differences of the crude oils and fuels. These features were used for qualitative and quantitative determination of the contaminant impregnated in the substrates. Specific wavelengths, where key absorption bands occur, were used for the individual characterization of oils and fuels. The intensity of these features can be correlated to the abundance of the contaminant in the mixtures. Grain size and composition of the impregnated substrate directly influence the variation of the spectral signatures. PCA models applied to the spectral library proved able to differentiate the type and density of the hydrocarbons. The calibration models generated by PLS are robust, of high quality, and can also be used to predict the concentration of oils and fuels in mixtures with mineral substrates. Such data and models are employable as a reference for classifying unknown samples of contaminated substrates. The results of this study have important implications for onshore exploration and environmental monitoring of oil and fuel leaks using proximal and far range multispectral, hyperspectral and ultraspectral remote sensing.
['Rebecca Del’Papa Moreira Scafutto', 'Carlos Roberto de Souza Filho']
Quantitative characterization of crude oils and fuels in mineral substrates using reflectance spectroscopy: Implications for remote sensing
702,825
For improved spectrum utilization, the key technique for acquiring spectrum situational awareness (SSA) -- spectrum sensing -- is greatly improved by cooperation among the active spectrum users as network size increases. However, the many cooperative spectrum sensing (CSS) schemes that have been proposed are based on the assumptions of accurate noise power estimates, characterizable variation in noise level, and the absence of false or malicious users. As part of a series of SSA research projects, in this work we propose a novel scheme for minimizing the effects of noise power estimation error (NPEE) and received signal power falsification (RSPF) by energy-based reliability evaluation. The scheme adopts the Voting rule for fusing multiple spectrum sensing data. Based on simulation results, the proposed scheme yields a significant improvement of 68.2-88.8% over conventional CSS schemes when compared on the basis of the schemes' stability under uncertainties in noise and signal power.
['Oladiran G. Olaleye', 'Muhammad A. Iqbal', 'Ahmed Aly', 'Dmitri Perkins', 'Magdy A. Bayoumi']
An Energy-Detection-Based Cooperative Spectrum Sensing Scheme for Minimizing the Effects of NPEE and RSPF
933,859
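The two ingredients the abstract names, energy detection and Voting-rule fusion, can be sketched minimally as follows. The signal model, threshold, and sensor count here are illustrative assumptions for demonstration, not the paper's parameters or reliability-evaluation scheme.

```python
# Illustrative sketch: per-sensor energy detection followed by
# Voting-rule fusion. Threshold, signal model, and sensor count are
# assumptions, not the paper's scheme.
import random

def energy_detect(samples, threshold):
    """Local decision: 1 if average signal energy exceeds the threshold."""
    energy = sum(x * x for x in samples) / len(samples)
    return 1 if energy > threshold else 0

def voting_fusion(decisions, k):
    """Voting rule: declare the band occupied if at least k sensors agree."""
    return 1 if sum(decisions) >= k else 0

rng = random.Random(1)
N = 256                                   # samples per sensing window
signal_amp, noise_std = 2.0, 1.0
# Five cooperating sensors, each observing signal plus Gaussian noise.
sensors = [[signal_amp + rng.gauss(0.0, noise_std) for _ in range(N)]
           for _ in range(5)]
decisions = [energy_detect(s, threshold=2.0) for s in sensors]
occupied = voting_fusion(decisions, k=3)  # majority vote over 5 sensors
```

With a present signal the per-sensor energy concentrates near signal power plus noise power (about 5 here), well above the noise-only level of about 1, so the vote declares the band occupied.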
Automatic assessment of the perceptual quality of a digital image is an important and challenging issue in computer vision. Although the human visual system (HVS) is sensitive to degradations of spatial structures, most existing methods do not take into account the spatial distribution of local structures. This paper reports a novel approach coined the high-order local derivative pattern (LDP) based metric (HOLDPM). In particular, HOLDPM extracts local image structures with LDPs in multiple directions to yield an accurate assessment of image quality. HOLDPM is extensively evaluated on three large-scale public databases. Experimental results demonstrate that HOLDPM is able to achieve high assessment accuracy. Moreover, the objective assessment results of HOLDPM are consistent with the subjective assessment results of the HVS. The experimental results also indicate that HOLDPM outperforms most of the state-of-the-art methods in distortion-specific tests. Additionally, HOLDPM shows competitive overall performance when measured with the weighted average of the Spearman rank-order correlation coefficient (SROCC) and the weighted average of the Pearson linear correlation coefficient (PLCC) over the test databases.
['Songlin Du', 'Yaping Yan', 'Yide Ma']
Blind image quality assessment with the histogram sequences of high-order local derivative patterns
722,248
Two techniques for the fabrication of fiber gratings with a narrowband filter response are reported. In the first technique, the reflection bandwidth is reduced by increasing the grating length. Fiber gratings 10 mm long have been made with a bandwidth of approximately 0.3 nm. In the second technique, a phase shift is incorporated into the grating to form a resonant structure. Fiber resonators with resonant dips in reflection and peaks in transmission having full-width half-maximum (FWHM) bandwidths of approximately 0.08 nm have been fabricated. A corresponding narrowband transmission peak was also measured.
['C.M. Ragdale', 'D.C.J. Reid', 'D. J. Robbins', 'J. Buus', 'Ian Bennion']
Narrowband fiber grating filters
406,162
A new approach for three-dimensional (3-D) reconstruction of building roofs from airborne light detection and ranging (LiDAR) data is proposed, and it includes four steps. Building roof points are first extracted from LiDAR data by using the reversed iterative mathematic morphological (RIMM) algorithm and the density-based method. The corresponding relations between points and rooftop patches are then established through a smoothness strategy involving “seed point selection, patch growth, and patch smoothing.” Layer-connection points are then generated to represent a layer in the horizontal direction and to connect different layers in the vertical direction. Finally, by connecting neighboring layer-connection points, building models are constructed with the second level of detailed data. The key contributions of this approach are the use of layer-connection points and the smoothness strategy for building model reconstruction. Experimental results are analyzed from several aspects, namely, the correctness and completeness, deviation analysis of the reconstructed building roofs, and the influence of elevation to 3-D roof reconstruction. In the two experimental regions used in this paper, the completeness and correctness of the reconstructed rooftop patches were about 90% and 95%, respectively. For the deviation accuracy, the average deviation distance and standard deviation in the best case were 0.05 m and 0.18 m, respectively; and those in the worst case were 0.12 m and 0.25 m. The experimental results demonstrated promising correctness, completeness, and deviation accuracy with satisfactory 3-D building roof models.
['Yongjun Wang', 'Hao Xu', 'Liang Cheng', 'Manchun Li', 'Yajun Wang', 'Nan Xia', 'Yanming Chen', 'Yong Tang']
Three-Dimensional Reconstruction of Building Roofs from Airborne LiDAR Data Based on a Layer Connection and Smoothness Strategy
746,450
In this paper we present Thracker - low-cost and robust hardware to track hand gestures in front of a screen or in small-scale active spaces like public displays or posters. Thracker uses capacitive sensing for tracking user input. Thracker allows for entirely new interaction modes, like picking and dropping an object on the screen with the hand. We present a fully working prototype and a short user study to confirm our findings.
['Raphael Wimmer', 'Paul Holleis', 'Matthias Kranz', 'A. Schmidt']
Thracker - Using Capacitive Sensing for Gesture Recognition
444,486
Spectrum sensing schemes for dynamic primary user signal under AWGN and rayleigh fading channels
['Wasan Kadhim Saad', 'Mahamod Ismail', 'Rosdiadee Nordin', 'Ayman A. El-Saleh']
Spectrum sensing schemes for dynamic primary user signal under AWGN and rayleigh fading channels
856,513
Motivated Learning for Goal Selection in Goal Nets.
['Huiliang Zhang', 'Zhiqi Shen', 'Chunyan Miao', 'Xudong Luo']
Motivated Learning for Goal Selection in Goal Nets.
851,740
Adaptive routing is widely regarded as a promising approach to improving interconnection network performance. Many designers of adaptive routing algorithms have used synthetic communication patterns, such as uniform and transpose traffic, to compare the performance of various adaptive routing algorithms with each other and with oblivious routing. These comparisons have shown that the average message latency is usually lower with adaptive routing. On the other hand, when a parallel program is executed on a multiprocessor the goal is to reduce the total execution time. We explain why improving the average message latency of a routing algorithm does not necessarily lead to a lower execution time for real applications. We support this observation by reporting simulation results for both adaptive and oblivious routing using communication derived from real applications. Specifically, we report the performance of various routing algorithms for directed acyclic graphs (DAGs) derived from the Cholesky factorization of sparse matrices. Our results show that there is little correlation between average message latency and the total execution time of a parallel program. Hence, average message latency does not seem to be a useful measure of the performance of a routing algorithm. This strongly suggests that current comparisons of routing algorithms do not provide a reliable indication of the performance improvements to be realized by executing programs on a multiprocessor with such a routing algorithm. We interpret these results and suggest several alternatives for further research.
['Loren Schwiebert', 'D. N. Jayasimha']
On measuring the performance of adaptive wormhole routing
69,639
An analysis of the destabilizing effect of daisy chained rate-limited actuators
['Kelly D. Hammett', 'Jordan M. Berg', 'Carla A. Schwartz', 'Siva S. Banda']
An analysis of the destabilizing effect of daisy chained rate-limited actuators
705,364
We present a new system for integrating evolutionary processes with live coding. The system is built upon an existing platform called Extramuros, which facilitates network-based collaboration on live coding performances. Our evolutionary approach uses the Tidal live coding language within this platform. The system uses a grammar to parse code patterns and create random mutations that conform to the grammar, thus guaranteeing that the resulting pattern has the correct syntax. With these mutations available, we provide a facility to integrate them during a live performance. To achieve this, we added controls to the Extramuros web client that allow coders to select patterns for submission to the Tidal interpreter. The fitness of a pattern is updated implicitly by the way the coder uses the patterns. In this way, appropriate patterns are continuously generated and selected for throughout a performance. We present examples of performances and discuss the utility of this approach in live coding music.
['Simon J. Hickinbotham', 'Susan Stepney']
Augmenting Live Coding with Evolved Patterns
724,484
Smart meters are being deployed worldwide on a trial basis and are expected to enable remote reading and demand response, among other advanced functions, by establishing a two-way communication network. However, it remains to be determined how these meters should transmit their data to an aggregation point. Our paper employs a medium access control (MAC) protocol for a cooperative network to address this problem. The protocol has been developed on the GNU Radio platform, while the test-bed consists of universal software radio peripherals (USRPs) that serve as network nodes. The performance of the system has been evaluated and compared with that of fully cooperative and single-input-single-output (SISO) networks. It has been shown that the proposed system outperforms the SISO network in terms of packet error rates and throughput.
['Saad Amin', 'Sohaib Ashraf', 'Mohammad Shahzeb Faisal', 'Muhammad Shahmeer Omar', 'Syed Ahsan Raza Naqvi', 'Syed Ali Hassan', 'Muhammad Usman Ilyas']
Implementation and Evaluation of a Cooperative MAC Protocol for Smart Data Acquisition
828,538
This article presents the results of a pilot dose survey including fifty patients who underwent combined screening: full field digital mammography (FFDM) plus digital breast tomosynthesis (DBT). The study also aimed to demonstrate the different dosimetric outcomes from using different glandularity assumptions and dosimetry methods. The mean glandular dose to each patient was computed using Dance's method with the UK glandularity assumption. The calculations were repeated using Wu/Boone's method with the "50-50" breast assumption and the results compared to those using Dance's method. For typical breasts, the dose from the combined examination was around 9.56 mGy: 4.26 mGy from two-view FFDM and 5.30 mGy from two-view DBT. Adopting the UK glandularity assumption was believed to more realistically reflect the population dose. The comparison between Dance's and Wu/Boone's methods indicated that the latter tended to show lower dose values, with mean differences of -3.6% for FFDM and -5.5% for DBT.
['Jason Tse', 'Roger Fulton', 'Mary Rickard', 'Patrick C. Brennan', 'Donald McLean']
A Pilot Study on Radiation Dose from Combined Mammography Screening in Australia
839,832
Fabrication of Straight Stainless-Steel Micro-Coils for the Use of Biodevice Components.
['Toshiyuki Horiuchi', 'Hiroshi Sakabe', 'Takao Yuzawa', 'Daichi Yamamoto']
Fabrication of Straight Stainless-Steel Micro-Coils for the Use of Biodevice Components.
740,008
By extending the basic idea of FAST, it is possible to create a detector that operates directly on an RGB color image and detects a higher number of features on it. The higher number of points and the simplification of the metric allow us to impose an additional condition on the keypoints. To do so, we address a known weakness of FAST and define a condition for similar points. This additional condition yields results that are more stable than those of FAST. Furthermore, the condition enables a fast implementation, so that despite the threefold amount of input data due to the color channels, less than three times the computation time is required.
['C. Robert Pech', 'Jens Vial', 'Andreas Zell']
cFAST: A fast color feature detector
577,500
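For context, the grayscale segment test at the core of FAST-style detectors can be sketched as follows: a pixel is a corner candidate if at least n contiguous pixels on a 16-pixel ring around it are all brighter than center + t or all darker than center - t. This is an illustrative helper, not the authors' code; the RGB extension and the similarity condition introduced in the paper are not reproduced here.

```python
# Sketch of the FAST-style segment test on a 16-pixel ring.
# Illustrative only; the paper's color (cFAST) extension and its
# similar-point condition are not reproduced.

def segment_test(center, ring, t=10, n=9):
    """True if `ring` holds a wrap-around run of >= n values that are
    all above center + t or all below center - t."""
    doubled = ring + ring                 # duplicate to catch wrap-around runs
    for sign in (1, -1):                  # +1: brighter arc, -1: darker arc
        run = 0
        for v in doubled:
            if sign * (v - center) > t:
                run += 1
                if run >= n:
                    return True
            else:
                run = 0
    return False
```

Doubling the ring is a simple way to handle arcs that wrap past the start of the circle without special-casing the indices.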
Embodied agents present an ongoing and challenging agenda for research in multi-modal user interfaces and human-computer interaction. Such agent metaphors will only be widely applicable to online applications when there is a standardised way to map underlying engines to the visual presentation of the agents. This paper delineates the functions and specifications of a mark-up language for scripting the animation of virtual characters. The language, called Character Mark-up Language (CML), is an XML-based character attribute definition and animation scripting language designed to aid in the rapid incorporation of lifelike characters/agents into online applications or virtual reality worlds. This multi-modal scripting language is designed to be easily understandable by human animators and easily generated by a software process such as software agents. CML is constructed based jointly on the motion and multi-modal capabilities of virtual life-like figures. The paper further illustrates the constructs of the language and describes a real-time execution architecture that demonstrates the use of such a language as a 4G language to easily utilise and integrate MPEG-4 media objects in online interfaces and virtual environments.
['Yasmine Arafa', 'Abe Mamdani']
Multi-modal embodied agents scripting
378,076
Automatic System for Zebrafish Counting in Fish Facility Tanks
['Francisco J. Silvério', 'Ana C. Certal', 'Carlos Mão de Ferro', 'Joana F. Monteiro', 'José Fernando A. Cruz', 'Ricardo Ribeiro', 'João Nuno Silva']
Automatic System for Zebrafish Counting in Fish Facility Tanks
859,072
Basic Study on Self-Transfer Aid Robotics.
['Yoshihiko Takahashi', 'Go Manabe', 'Katsumi Takahashi', 'Takuro Hatakeyama']
Basic Study on Self-Transfer Aid Robotics.
988,039
After a program has crashed and terminated abnormally, it typically leaves behind a snapshot of its crashing state in the form of a core dump. While a core dump carries a large amount of information, which has long been used for software debugging, it barely serves as an informative debugging aid in locating software faults, particularly memory corruption vulnerabilities. A memory corruption vulnerability is a special type of software fault that an attacker can exploit to manipulate the content at a certain memory location. As such, a core dump may contain a certain amount of corrupted data, which increases the difficulty of identifying useful debugging information (e.g., a crash point and stack traces). Without a proper mechanism to deal with this problem, a core dump can be practically useless for software failure diagnosis. In this work, we develop CREDAL, an automatic tool that employs the source code of a crashing program to enhance core dump analysis and turns a core dump into an informative aid for tracking down memory corruption vulnerabilities. Specifically, CREDAL systematically analyzes a potentially corrupted core dump and identifies the crash point and stack frames. For a core dump carrying corrupted data, it goes beyond the crash point and stack trace. In particular, CREDAL further pinpoints the variables holding corrupted data using the source code of the crashing program along with the stack frames. To assist software developers (or security analysts) in tracking down a memory corruption vulnerability, CREDAL also performs analysis and highlights the code fragments corresponding to data corruption. To demonstrate the utility of CREDAL, we use it to analyze 80 crashes corresponding to 73 memory corruption vulnerabilities archived in the Offensive Security Exploit Database. We show that CREDAL can accurately pinpoint the crash point and (fully or partially) restore a stack trace even when a crashing program's stack carries corrupted data.
In addition, we demonstrate that CREDAL can potentially reduce the manual effort of finding the code fragment that is likely to contain memory corruption vulnerabilities.
['Jun Xu', 'Dongliang Mu', 'Ping Chen', 'Xinyu Xing', 'Pei Wang', 'Peng Liu']
CREDAL: Towards Locating a Memory Corruption Vulnerability with Your Core Dump
917,910
Objective: Motivation is a driving force in human-technology interaction. This paper represents an effort to (a) describe a theoretical model of motivation in human-technology interaction, (b) provide design principles and guidelines based on this theory, and (c) describe a sequence of steps for the evaluation of motivational factors in human-technology interaction. Background: Motivation theory has been relatively neglected in human factors/ergonomics (HF/E). In both research and practice, the (implicit) assumption has been that the operator is already motivated or that motivation is an organizational concern and beyond the purview of HF/E. However, technology can induce task-related boredom (e.g., automation) that can be stressful and also increase system vulnerability to performance failures. Method: A theoretical model of motivation in human-technology interaction is proposed, based on extension of the self-determination theory of motivation to HF/E. This model provides the basis both for future research and for the development of practical recommendations for design. Results: General principles and guidelines for motivational design are described, as well as a sequence of steps for the design process. Conclusion: Human motivation is an important concern for HF/E research and practice. Procedures in the design of both simple and complex technologies can, and should, include the evaluation of motivational characteristics of the task, interface, or system. In addition, researchers should investigate these factors in specific human-technology domains. Application: The theory, principles, and guidelines described here can be incorporated into existing techniques for task analysis and for interface and system design.
['James L. Szalma']
On the Application of Motivation Theory to Human Factors/Ergonomics Motivational Design Principles for Human–Technology Interaction
82,069
Topology-representing networks (TRNs) generate reduced models of biomolecules and thereby facilitate the fitting of molecular fragments into large macromolecular complexes. The components of such complexes undergo a wide range of motions, and shapes observed at low resolution often deviate from the known atomic structures. What is required for the modeling of such motions is a combination of global shape constraints based on the low-resolution data with a local modeling of atomic interactions. We present a novel Motion Capture Network that freezes inessential degrees of freedom to maintain the stereochemistry of an atomic model. TRN-based deformable models retain much of the mechanical properties of biological macromolecules. The elastic models yield a decomposition of the predicted motion into vibrational normal modes and are amenable to interactive manipulation with haptic rendering software.
['Willy Wriggers', 'Pablo Chacón', 'Julio A. Kovacs', 'Florence Tama', 'Stefan Birmanns']
Topology representing neural networks reconcile biomolecular shape, structure, and dynamics
371,659
This paper aims to characterize whether a multi-layer cellular neural network is of deep architecture; namely, when can an n -layer cellular neural network be replaced by an m -layer cellular neural network for m < n yet still preserve the same output phenomena? From a mathematical point of view, such characterization involves investigating whether the topological structure of two (or multiple) layers is conjugate. A decision procedure that addresses the necessary and sufficient condition for the topological conjugacy between two layers in a network is revealed.
['Jung-Chao Ban', 'Chih-Hung Chang']
When are two multi-layer cellular neural networks the same?
698,759
This research aims to identify the factors that drive an employee to comply with requirements of the Information Security Policy (ISP) with regard to protecting her organization’s information and technology resources. Two different research models are proposed for an employee’s individual based beliefs and organization based beliefs. An employee’s attitude is traced to its underlying foundational beliefs in each model, namely, benefit of compliance, cost of non-compliance, and cost of compliance, which are beliefs that represent the perceived effects of compliance or non-compliance. It is also postulated that these beliefs along with an employee’s attitude are affected by her Information Security Awareness (ISA). Besides the structural model testing of individual and organizational models of compliance, the moderating role of an employee’s work experience is investigated. Our results show that, while individual benefit of compliance and cost of compliance are not significant in the low experience group, all individual based beliefs are significant in the high experience group. Similarly, organizational benefit of compliance is not significant in the low experience group, while all organization based beliefs are significant in the high experience group. Furthermore, ISA is found to affect an employee’s attitude and all her individual and organization based beliefs. As organizations strive to get their employees to follow their information security rules and regulations, our study mainly sheds light on the moderating role of an employee’s work experience in changing the strength of individual and organization based beliefs on employees’ attitude as well as her ISA.
['Burcu Bulgurcu', 'Hasan Cavusoglu', 'Izak Benbasat']
Effects of Individual and Organization Based Beliefs and the Moderating Role of Work Experience on Insiders' Good Security Behaviors
247,137
This paper presents a new topology for a current-mode Wheatstone bridge (CMWB) that utilizes an Operational Floating Current Conveyor (OFCC) as a basic building block. The proposed CMWB has been analyzed, simulated, implemented, and experimentally tested. The experimental results verify that the proposed CMWB outperforms existing CMWBs in terms of accuracy.
['Yehya H. Ghallab', 'Wael M. Badawy']
A New Design of a Current-mode Wheatstone Bridge Using Operational Floating Current Conveyor
909,226
Technical Perspective: Catching lies (and mistakes) in offloaded computation
['Michael Mitzenmacher', 'Justin Thaler']
Technical Perspective: Catching lies (and mistakes) in offloaded computation
611,152
Biofeedback is an emerging technology used as a legitimate medical technique for several medical issues such as heart problems, pain, stress, and depression, among others. This paper introduces the Multi-Modal Intelligent System for Biofeedback Interactions (MMISBI), an interactive and intelligent biofeedback system using an interactive mirror to facilitate and enhance the user's awareness of various physiological functions using biomedical sensors in real-time. The system comprises different biofeedback sensors that collect physiological features; the system also provides intuitive, intelligent, and adaptive user interfaces that promote natural communication between the user and the biofeedback system. Ambient Intelligence (AmI) technology is incorporated in the system to provide means for biofeedback responses. The proposed conceptual system has been evaluated by 15 subjects and the results are encouraging. Ninety percent (90%) of the subjects confirmed that the system is beneficial, deployable, and affordable for personal use. On the other hand, 30% of the subjects indicated that privacy is the main issue hindering wide deployment of the system.
['Mohammed F. Alhamid', 'Mohamad Eid', 'Abdulmotaleb El Saddik']
A multi-modal intelligent system for biofeedback interactions
141,750
This paper explores user views on the way in which interactive e-branding techniques are perceived. A survey of 100 respondents was conducted to address questions relating to the topic. It was found that current techniques may improve effectiveness, efficiency, and user satisfaction. The paper discusses the findings and concludes with recommendations for further work to improve the overall user experience through interactivity. Virtual shopping assistance was also identified as a factor that can further aid users in resolving problems during their online engagement.
['Dimitrios Rigas', 'Hammad Hussain']
Interactive e-Branding in e-Commerce Interfaces: Survey Results and Implications
855,558
For the original paper by L. U. Gokdere et al., see ibid. (vol. 48, pp. 870-872, Aug. 2001). Contrary to the claims made in the comments on our paper by R. T. Novotnak and J. Chiasson (ibid., vol. 50, pp. 820-821, Aug. 2003), the passivity-based controller developed for induction motors has already been tested on the same demanding trajectories used for the input-output linearization controller. The experimental results show that the passivity-based controller provides closer tracking of the same mechanical trajectory when compared with the input-output linearization controller.
['Levent U. Gokdere', 'Marwan A. Simaan', 'Charles W. Brice']
Response to comments on "passivity-based control of saturated induction motors"
38,795
An EXPTIME Algorithm for Data-Aware Service Simulation Using Parametrized Automata
['Walid Belkhir', 'Yannick Chevalier', 'Michaël Rusinowitch']
An EXPTIME Algorithm for Data-Aware Service Simulation Using Parametrized Automata
688,686
Conceptual models are applied as the first step in software design methodologies for collecting the semantics involved in the universe of discourse. Nevertheless, the abstraction process creates some misunderstandings for novice designers, such as difficulties in modeling some constructs and in understanding the semantics that they represent. This paper presents a thorough study of errors detected among Database Design students in Computer Science Engineering when they apply the abstraction process to generate a conceptual schema using a specific model. Specifically, the paper focuses on errors made in the design of ternary relationships. Some heuristics are proposed in order to help novice designers avoid these common errors, and an experimental study is presented to compare the number of errors made by the students before and after applying these heuristics.
['Dolores Cuadra', 'Ana María Iglesias Maqueda', 'Elena Castro', 'Paloma Martínez Fernández']
Educational Experiences Detecting, Using, and Representing Ternary Relationships in Database Design
343,318
This paper introduces the concept of proactive execution of robot tasks in the context of human-robot cooperation with uncertain knowledge of the human's intentions. We present a system architecture that defines the necessary modules of the robot and their interactions with each other. The two key modules are the intention recognition that determines the human user's intentions and the planner that executes the appropriate tasks based on those intentions. We show how planning conflicts due to the uncertainty of the intention information are resolved by proactive execution of the corresponding task that optimally reduces the system's uncertainty. Finally, we present an algorithm for selecting this task and suggest a benchmark scenario.
['Oliver C. Schrempf', 'Uwe D. Hanebeck', 'Andreas J. Schmid', 'Heinz Wörn']
A novel approach to proactive human-robot cooperation
274,704
Future SOC devices make extensive use of phase-locked loops either to generate gigahertz clocks on-chip or to adjust the phase of data signals in high-speed IO links running at multiple gigabits per second. The high-speed analog nature of the circuitry requires a dedicated test strategy to obtain fault coverage, particularly for parametric defects affecting jitter performance. While traditional specification-oriented test methods require a complex setup of additional instrumentation, this work describes a completely new model-based approach using existing capture-and-compare equipment available with ATE. The methodology proposed in this work performs a test by verifying the frequency-domain model of the phase regulation characteristic developed during the design phase of the circuit. The method scales in performance and accuracy with leading-edge measurement equipment such as ATE and BERT.
['Bernd Laquai']
A model-based test approach for testing high speed PLLs and phase regulation circuitry in SOC devices
358,631
Group Key Agreement.
['Mike Burmester']
Group Key Agreement.
752,775
Research on Collision Risk Model in Free Flight Based on Position Error
['Zhaoning Zhang', 'Ruijun Shi']
Research on Collision Risk Model in Free Flight Based on Position Error
849,842
An arbitrary-input pulsewidth control loop (AIPWCL) based on a delay-locked loop with duty cycle corrector is presented. The duty cycles of the clock signals can be adjusted from 10% to 90% in 10% steps. The proposed AIPWCL is designed and simulated using a TSMC 0.13 µm CMOS process. The operation frequency range is from 770 MHz to 1.05 GHz. The locking time of the AIPWCL is less than 40 ns within the operation frequency band. The power dissipation is 4.38 mW with a 1.2 V supply. The peak-to-peak jitter is less than 1 ps at an input clock frequency of 1 GHz while adjusting various duty cycles.
['Ro-Min Weng', 'Yun-Chih Lu', 'Chun-Yu Liu']
A low jitter arbitrary-input pulsewidth control loop with wide duty cycle adjustment
299,520
Colorectal cancer, the second leading cause of cancer deaths in the United States, is a disease for which there are no known biomarkers of risk that can be used for predicting and preventing the disease. Based on new knowledge of the molecular basis of colorectal cancer, we developed and validated a panel of biomarkers of risk that can be measured in rectal biopsies. The goal of this work is to develop an integrated detection and image analysis quantification system for measuring and applying these biomarkers in clinical research and care. More importantly, the new system can process biopsy images from both traditional and bionanotechnology quantum dot-based IHC, and through a combination of novel and automated image analysis and quantification algorithms, it will significantly reduce processing time by detecting multiple biomarkers simultaneously on the same histologic sections. Clinical application of this novel process of detecting and quantifying biomarkers, coupled with decision support from the analysis of a biomarker quantification database, is expected to open new frontiers in the field of colorectal cancer prognosis and treatment.
['Qaiser Chaudry', 'Koon Yin Kong', 'Thomas U. Ahearn', 'Vaunita Cohen', 'Roberd M. Bostick', 'May D. Wang']
AN INTEGRATED IMAGE QUANTIFICATION SYSTEM FOR COLORECTAL CANCER RISK ASSESSMENT USING QUANTUM DOTS AND MOLECULAR PROFILING
542,327
The maximum traffic load that can be supported by a wavelength division multiplexed (WDM) optical burst switched (OBS) network with dynamic wavelength allocation is studied. It is shown that it depends on the requirements of the class of service and on the efficiency of the dynamic routing and wavelength assignment (RWA) algorithm employed. Two methods to build the bursts are presented as well as their influence on the maximum traffic load that can be supported.
['Ignacio de Miguel', 'Michael Düser', 'Polina Bayvel']
Traffic Load Bounds for Optical Burst-Switched Networks with Dynamic Wavelength Allocation
198,011
Test-case prioritization, proposed at the end of the last century, aims to schedule the execution order of test cases so as to improve test effectiveness. In the past years, test-case prioritization has gained much attention and has seen significant achievements in five aspects: prioritization algorithms, coverage criteria, measurement, practical concerns involved, and application scenarios. In this article, we first review the achievements of test-case prioritization from these five aspects and then give our perspectives on its challenges.
['Dan Hao', 'Lu Zhang', 'Hong Mei']
Test-case prioritization: achievements and challenges
822,110
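One of the classic prioritization algorithms surveyed above, the greedy "additional coverage" strategy, can be sketched in a few lines. The statement-coverage sets and the coverage-reset fallback below are illustrative conventions, not details taken from the article:

```python
def additional_coverage_order(tests):
    """Order test cases greedily: repeatedly pick the test that covers
    the most statements not yet covered by earlier picks.

    tests: dict mapping test name -> set of covered statement ids.
    """
    remaining = dict(tests)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            # No test adds new coverage: reset the covered set (the
            # usual convention) and fall back to raw coverage size.
            covered = set()
            best = max(remaining, key=lambda t: len(remaining[t]))
        order.append(best)
        covered |= remaining.pop(best)
    return order

tests = {"t1": {1, 2, 3}, "t2": {1, 2}, "t3": {4}}
print(additional_coverage_order(tests))  # → ['t1', 't3', 't2']
```

Measured by APFD-style metrics, this "additional" variant often outperforms simply ordering tests by total coverage.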
This paper proposes a face hallucination method for the reconstruction of high-resolution facial images from single-frame, low-resolution facial images. The proposed method is derived from example-based hallucination methods and morphable face models. First, we propose a recursive error back-projection method to compensate for residual errors, and a region-based reconstruction method to preserve the characteristics of local facial regions. Next, we define an extended morphable face model, in which an extended face is composed of the interpolated high-resolution face from a given low-resolution face and its original high-resolution equivalent. The extended face is then separated into an extended shape and an extended texture. We performed various hallucination experiments using the MPI, XM2VTS, and KF databases, compared the reconstruction errors, structural similarity index, and recognition rates, and showed the effects of face detection errors and shape estimation errors. The encouraging results demonstrate that the proposed methods can improve the performance of face recognition systems. In particular, the proposed method can enhance the resolution of single-frame, low-resolution facial images.
['Jeong-Seon Park', 'Seong-Whan Lee']
An Example-Based Face Hallucination Method for Single-Frame, Low-Resolution Facial Images
363,885
This paper presents a process-based discrete-event simulation library for construction project planning. Business process models are used to build an accumulative knowledge base for standard construction processes in the form of ready-to-use process templates. The library aims to reduce the time and effort needed to create simulation models for a construction project throughout its lifecycle by integrating process models with simulation models and providing a set of reusable simulation components. The paper presents the concepts and describes the architecture of the system with a brief review of its features.
['Raimar J. Scherer', 'Ali Ismail']
Process-based simulation library for construction project planning
506,752
This special issue showcases developments in serious games that can have an impact on future research, development, and application of computer graphics and related techniques. The articles demonstrate the games' rich potential, spanning from health and culture applications to novel interaction techniques and support for 3D player data visualization.
['Tiffany Barnes', 'L. Miguel Encarnação', 'Christopher D. Shaw']
Serious Games
664,586
Cocyclic Simplex Codes of Type α Over Z4 and Z2s
['Nimalsiri Pinnawala', 'Asha Rao']
Cocyclic Simplex Codes of Type α Over Z4 and Z2s
630,042
We discuss applications of rewriting in three different areas: design and analysis of algorithms, theorem proving and term rewriting, and modeling and analysis of biological processes.
['Ashish Tiwari']
Rewriting in Practice
251,365
A Modeling Language for Program Design and Synthesis.
['Don S. Batory']
A Modeling Language for Program Design and Synthesis.
799,420
Does the Content of Speech Influence its Perceived Sound Quality
['Alexander Raake']
Does the Content of Speech Influence its Perceived Sound Quality
775,283
Results for Web search queries are ranked using heuristics that typically analyze the global link topology, user behavior, and content relevance. We point to a particular inefficiency of such methods: information redundancy. In queries where learning about a subject is an objective, modern search engines return relatively unsatisfactory results, as they consider the query coverage of each page individually rather than of a set of pages as a whole. We address this problem using essential pages. If we denote as $\mathbb{S}_Q$ the total knowledge that exists on the Web about a given query $Q$, we want to build a search engine that returns a set of essential pages $E_Q$ that maximizes the information covered over $\mathbb{S}_Q$. We present a preliminary prototype that optimizes the selection of essential pages; we draw some informal comparisons with respect to existing search engines; and finally, we evaluate our prototype using a blind-test user study.
['Ashwin Swaminathan', 'Cherian V. Mathew', 'Darko Kirovski']
Essential Pages
666,904
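Maximizing the information covered over $\mathbb{S}_Q$ by a small set $E_Q$ is an instance of the maximum-coverage problem, for which greedy selection gives the classic (1 − 1/e) approximation. A minimal sketch, with pages represented as hypothetical term sets rather than the authors' actual prototype representation:

```python
def essential_pages(pages, k):
    """Greedily pick up to k pages whose combined term coverage is maximal.

    pages: dict mapping page id -> set of terms/facts it covers.
    """
    covered, chosen = set(), []
    for _ in range(min(k, len(pages))):
        best = max(
            (p for p in pages if p not in chosen),
            key=lambda p: len(pages[p] - covered),
        )
        if not pages[best] - covered:
            break  # every remaining page is redundant
        chosen.append(best)
        covered |= pages[best]
    return chosen

pages = {
    "a": {"query", "history", "definition"},
    "b": {"definition", "examples"},
    "c": {"history"},
}
print(essential_pages(pages, 2))  # → ['a', 'b']
```

Note how page "c" is never chosen: its only term is already covered by "a", which is exactly the redundancy the abstract targets.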
Specific-subject-oriented information collection is one of the key technologies of vertical search engines, directly affecting the speed and relevance of search results. The topic information collection algorithm is widely used for its accuracy. The Hidden Markov Model (HMM) is used to learn and judge the relevance between a Uniform Resource Locator (URL) and the topic information. The Rocchio method is used to construct prototype vectors relevant to the topic information, and the HMM is used to learn preferred browsing paths. Concept maps including the semantics of each webpage are constructed, and the web's link structure can then be determined. Finally, the validity of the algorithm is demonstrated experimentally: compared with the Best-First algorithm, it collects more relevant pages and achieves a higher precision ratio.
['Haiyan Jiang', 'Xingce Wang', 'Zhongke Wu', 'Mingquan Zhou', 'Xuesong Wang']
Topic Information Collection Based on the Hidden Markov Model
214,532
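The Rocchio prototype-vector step mentioned above admits a compact sketch. The beta/gamma weights and the clipping of negative term weights are common conventions assumed here, not values taken from the paper:

```python
import numpy as np

def rocchio_prototype(relevant, irrelevant, beta=0.75, gamma=0.15):
    """Topic prototype vector (Rocchio): weighted centroid of relevant
    document vectors minus a damped centroid of irrelevant ones, with
    negative term weights clipped to zero."""
    proto = beta * np.mean(relevant, axis=0)
    if len(irrelevant):
        proto = proto - gamma * np.mean(irrelevant, axis=0)
    return np.clip(proto, 0.0, None)

# Toy 2-term vector space: two relevant documents, one irrelevant one.
relevant = np.array([[1.0, 0.0], [0.0, 1.0]])
irrelevant = np.array([[1.0, 1.0]])
print(rocchio_prototype(relevant, irrelevant))  # → [0.225 0.225]
```

Candidate URLs can then be scored against this prototype (e.g. by cosine similarity) before the HMM path model is applied.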
This paper explores the notion of an atomic action as a method of process structuring. This notion, first introduced explicitly by Eswaran et al. [6] in the context of database systems, reduces the problem of coping with many processes to that of coping with a single process within the atomic action. A form of process synchronization, the await statement, is adapted to work naturally with atomic actions. System recovery is also considered, and we show how atomic actions can be used to isolate recovery action to a single process. Explicit control of recovery is provided by a reset procedure that permits information from rejected control paths to be passed to subsequent alternative paths.
['David Lomet']
Process structuring, synchronization, and recovery using atomic actions
401,047
This paper concerns the iterative implementation of a knowledge model in a data mining context. Our approach relies on coupling a Bayesian network design with an association rule discovery technique. First, the relevancy of discovered association rules is enhanced by exploiting the expert knowledge encoded within a Bayesian network, i.e., avoiding trivial rules with respect to known dependencies. Moreover, the Bayesian network can be updated through an expert-driven annotation process on computed association rules. Our approach is experimentally validated on the Asia benchmark dataset.
['Clément Fauré', 'Sylvie Delprat', 'Jean-François Boulicaut', 'Alain Mille']
Iterative bayesian network implementation by using annotated association rules
356,596
Recently, there has been great interest in the modeling and analysis of the process industry, and various models have been proposed for different uses. It is useful to have a model that serves as an analytical aid in short-term scheduling for the oil refinery process. However, the oil refinery process has special constraints and requirements, and existing models cannot be applied directly. Thus, as an application in this paper, we extend the hybrid Petri net to model crude-oil operations in the oil refinery process. This Petri net is called a controlled colored timed Petri net (CCTPN). In this model, a token carries both discrete and continuous properties. A token in a discrete place shows its discrete properties, while its continuous properties are captured when it is in a continuous place. A discrete transition treats a token as a discrete one, and a continuous transition treats it as a continuous one. In this way, we integrate the discrete and continuous processes in the CCTPN. Based on the CCTPN, liveness is defined, and with this definition we show how to detect conflicts in scheduling the system.
['NaiQi Wu', 'Liping Bai', 'Chengbin Chu']
Modeling and Conflict Detection of Crude Oil Operations for Refinery Process Based on Controlled Colored Timed Petri Net
283,982
Noise PSD Estimation Using Blind Source Separation in a Diffuse Noise Field
['Lin Wang', 'Timo Gerkmann', 'Simon Doclo']
Noise PSD Estimation Using Blind Source Separation in a Diffuse Noise Field
733,892
Many graph clustering algorithms focus on producing a single partition of the vertices in the input graph. Nevertheless, a single partition may not provide sufficient insight about the underlying data. In this context, it would be interesting to explore alternative clustering solutions. Many areas, such as social media marketing demand exploring multiple clustering solutions in social networks to allow for behavior analysis to find, for example, potential customers or influential members according to different perspectives. Additionally, it would be desirable to provide not only multiple clustering solutions, but also to present multiple non-redundant ones, in order to unleash the possible many facets from the underlying dataset. In this paper, we propose RM-CRAG, a novel algorithm to discover the top-k non-redundant clustering solutions in attributed graphs, i.e., a ranking of clusterings that share the least amount of information, in the information theoretic sense. We also propose MVNMI, an evaluation criterion to assess the quality of a set of clusterings. Experimental results using different datasets show the effectiveness of the proposed algorithm.
['Gustavo Paiva Guedes', 'Eduardo S. Ogasawara', 'Eduardo Bezerra', 'Geraldo Xexéo']
Discovering top-k non-redundant clusterings in attributed graphs
810,752
Full text is available as a scanned copy of the original print version. Selected references from the article are indexed in PubMed.
['Debra Revere', 'Leilani St. Anna', 'Debra S. Ketchell', 'David Kauff', 'Barak Gaster', 'Diane Timberlake']
Using Contextual Inquiry to Inform Design of a Clinical Information Tool.
792,205
This paper presents an efficient method to solve the obstacle-avoiding rectilinear Steiner tree (OARSMT) problem optimally. Our work is developed based on the GeoSteiner approach in which full Steiner trees (FSTs) are first constructed and then combined into a rectilinear Steiner minimum tree (RSMT). We modify and extend the algorithm to allow obstacles in the routing region. For each routing obstacle, we first introduce four virtual terminals located at its four corners. We then give the definition of FSTs with blockages and prove that they will follow some very simple structures. Based on these observations, a two-phase approach is developed for the construction of OARSMTs. In the first phase, we generate a set of FSTs with blockages. In the second phase, the FSTs generated in the first phase are used to construct an OARSMT. Finally, experiments on several benchmarks are conducted. Results show that the proposed method is able to handle problems with hundreds of terminals in the presence of multiple obstacles, generating an optimal solution in a reasonable amount of time.
['Tao Huang', 'Liang Li', 'Evangeline F. Y. Young']
On the Construction of Optimal Obstacle-Avoiding Rectilinear Steiner Minimum Trees
176,399
This paper provides exact analytical expressions for the first and second moments of the true error for linear discriminant analysis (LDA) when the data are univariate and taken from two stochastic Gaussian processes. The key point is that we assume a general setting in which the sample data from each class do not need to be identically distributed or independent within or between classes. We compare the true errors of designed classifiers under the typical i.i.d. model and when the data are correlated, providing exact expressions and demonstrating that, depending on the covariance structure, correlated data can result in classifiers with either greater error or less error than when training with uncorrelated data. The general theory is applied to autoregressive and moving-average models of the first order, and it is demonstrated using real genomic data.
['Amin Zollanvari', 'Jianping Hua', 'Edward R. Dougherty']
Analytical study of performance of linear discriminant analysis in stochastic settings
262,158
The structure and dynamics of a modern business environment are very hard to model using traditional methods. Such complexity raises challenges to effective business analysis and improvement. The importance of applying business process simulation to analyze and improve business activities has been widely recognized. However, one remaining challenge is the development of approaches to human resource behavior simulation. To address this problem, we describe a novel simulation approach where intelligent agents are used to simulate human resources by performing allocated work from a workflow management system. The behavior of the intelligent agents is driven by a state transition mechanism called a Hierarchical Task Network (HTN). We demonstrate and validate our simulator via a medical treatment process case study. Analysis of the simulation results shows that the behavior driven by the HTN is consistent with the design of the workflow model. We believe these preliminary results support the development of more sophisticated agent-based human resource simulation systems.
['Hanwen Guo', 'Ross A. Brown', 'Rune K. Rasmussen']
Human resource behaviour simulation in business processes
679,244
We introduce a method to extract object boundaries from an image. The method utilizes a deformable curve based on the Self-Organizing Map (SOM) algorithm. The proposed SOM has some unique properties, such as batch update and neuron insertion/deletion. These properties allow the SOM to converge to object concavities while maintaining a uniform distribution of neurons along the curve. In comparison with traditional active contour methods, this algorithm is less sensitive to initialization and more flexible in noisy conditions. It outperforms the Gradient Vector Flow method.
['Yu He', 'Songhua Xu', 'Willard L. Miranker']
A Force Field Driven SOM for boundary detection
176,563
The design of a tremor estimator is an important part of designing mechanical tremor suppression orthoses. A number of tremor estimators have been developed and applied with the assumption that tremor is a mono-frequency signal. However, recent experimental studies have shown that Parkinsonian tremor consists of multiple frequencies, and that the second and third harmonics make a large contribution to the tremor. Thus, the current estimators may have limited performance on estimation of the tremor harmonics. In this paper, a high-order tremor estimation algorithm is proposed and compared with its lower-order counterpart and a widely used estimator, the Weighted-frequency Fourier Linear Combiner (WFLC), using 18 Parkinsonian tremor data sets. The results show that the proposed estimator has better performance than its lower-order counterpart and the WFLC. The percentage estimation accuracy of the proposed estimator is 85±2.9%, an average improvement of 13% over the lower-order counterpart. The proposed algorithm holds promise for use in wearable tremor suppression devices.
['Yue Zhou', 'Mary E. Jenkins', 'Michael D. Naish', 'Ana Luisa Trejos']
Design and validation of a high-order weighted-frequency fourier linear combiner-based Kalman filter for parkinsonian tremor estimation
910,385
Network structures formed by actin filaments are present in many kinds of fluorescence microscopy images. In order to quantify the conformations and dynamics of such actin filaments, we propose a fully automated method to extract actin networks from images and analyze network topology. The method handles intersecting filaments well and, to some extent, overlapping filaments. First, we automatically initialize a large number of Stretching Open Active Contours (SOACs) from ridge points detected by searching for plus-to-minus sign changes in the gradient map of the image. These initial SOACs then elongate simultaneously along the bright centerlines of filaments by minimizing an energy function. During their evolution, they may merge or stop growing, thus forming a network that represents the topology of the filament ensemble. We further detect junction points in the network and break the SOACs at junctions to obtain “SOAC segments”. These segments are then re-grouped using a graph-cut spectral clustering method to represent the configuration of actin filaments. The proposed approach is generally applicable to extracting intersecting curvilinear structures in noisy images. We demonstrate its potential using two kinds of data: (1) actin filaments imaged by Total Internal Reflection Fluorescence Microscopy (TIRFM) in vitro; (2) actin cytoskeleton networks in fission yeast imaged by spinning disk confocal microscopy.
['Ting Xu', 'Hongsheng Li', 'Tian Shen', 'Nikola Ojkic', 'Dimitrios Vavylonis', 'Xiaolei Huang']
Extraction and analysis of actin networks based on Open Active Contour models
373,008
In graph modification problems, one is given a graph G and the goal is to apply a minimum number of modification operations (such as edge deletions) to G such that the resulting graph fulfills a certain property. For example, the Cluster Deletion problem asks to delete as few edges as possible such that the resulting graph is a disjoint union of cliques. Graph modification problems appear in numerous applications, including the analysis of biological and social networks. Typically, graph modification problems are NP-hard, making them natural candidates for parameterized complexity studies. We discuss several fruitful interactions between the development of fixed-parameter algorithms and the design of heuristics for graph modification problems, featuring quite different aspects of mutual benefits.
['Christian Komusiewicz', 'André Nichterlein', 'Rolf Niedermeier']
Parameterized Algorithmics for Graph Modification Problems: On Interactions with Heuristics
810,675
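For Cluster Deletion, the abstract's running example, both fixed-parameter branching and many heuristics pivot on the same certificate: an induced P3, i.e., a path u-v-w with u and w non-adjacent. A graph is a disjoint union of cliques exactly when it contains no P3. A minimal detector, over a hypothetical adjacency-dict representation:

```python
def find_p3(adj):
    """Return an induced P3 (u, v, w) with v adjacent to both u and w
    but u and w non-adjacent, or None if the graph is a cluster graph
    (a disjoint union of cliques). adj maps vertex -> set of neighbours."""
    for v, nbrs in adj.items():
        for u in nbrs:
            for w in nbrs:
                if u < w and w not in adj[u]:
                    return (u, v, w)
    return None

# Triangle {1, 2, 3} with a pendant vertex 4 attached to 3:
# not a cluster graph, so a P3 through vertex 3 must exist.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(find_p3(adj))  # some induced P3, e.g. (1, 3, 4)
```

Branching on each found P3 (delete either edge {u, v} or edge {v, w}) yields the textbook 2^k search tree for Cluster Deletion with at most k edge deletions.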
We present a promising new framework for improving boosting performance with transductive inference when training an automatic text detector. The resulting detector is fast and efficient, and it exhibits high accuracy on a large test set.
['David Bargeron', 'Paul A. Viola', 'Patrice Y. Simard']
Boosting-based transductive learning for text detection
74,418
In this paper we present a rule-based formalism for the acquisition, representation, and application of the transfer knowledge used in a Japanese-English machine translation system. The transfer knowledge is learnt automatically from a parallel corpus by using structural matching between the parse trees of translation pairs. The user can customize the rule base by simply correcting translation results. We have extended the machine translation system with two user-friendly front ends: an MSWord interface and a Web interface. Since our system is mainly intended as a tool for language students to convey a better understanding of Japanese, we also offer the display of detailed information about lexical, syntactic, and transfer knowledge. The system has been implemented in Amzi! Prolog, using the Amzi! Logic Server Visual Basic Module and the Amzi! Logic Server CGI Interface to develop the front ends.
['Werner Winiwarter']
JETCAT – Japanese-English Translation Using Corpus-Based Acquisition of Transfer Rules
399,590
Indium-gallium-zinc oxide (IGZO) thin-film transistors (TFTs) are simulated using TCAD software. Nonlinearities observed in fabricated devices are reproduced in simulation and the corresponding physical characteristics are further investigated. For small channel lengths (below 1 µm), simulations of the TFTs show short-channel effects, namely drain-induced barrier lowering (DIBL): the effective source-channel barrier is shown to decrease with drain bias. Simulations with increasing shallow donor-like states yield transfer characteristics with hump-like behavior, as typically observed after gate-bias stress. Additionally, a dual-gate architecture is simulated, exhibiting threshold-voltage modulation by the second gate bias.
['Jorge Martins', 'Pedro Barquinha', 'João Goes']
TCAD Simulation of Amorphous Indium-Gallium-Zinc Oxide Thin-Film Transistors
830,392
The interference channel with a cognitive relay is a variation of the classical two-user interference channel in which a relay aids the transmission among the users. The relay is assumed to have genie-aided cognition: that is, it has full, a priori knowledge of the messages to be transmitted. We obtain a new outer bound for this channel model and prove capacity for a class of channels in which the transmissions of the two users are non-interfering. This capacity result improves on a previous result for the Gaussian case, in which the capacity was proved to within a gap of 3 bits/s/Hz.
['Hossein Charmchi', 'Ghosheh Abed Hodtani', 'Masoumeh Nasiri-Kenari']
A New Outer Bound for a Class of Interference Channels with a Cognitive Relay and a Certain Capacity Result
50,256
In this paper, we develop an HMM-based sliding video text recognizer and present our results on Turkish broadcast news for the hearing impaired. We use well-known speech recognition techniques to model and recognize sliding video text characters using a minimal amount of labeled data. The baseline system without any language modeling gives a word error rate of 2.2% on 138 minutes of test data. We then provide an analysis of character errors and employ a character-based language model to correct most of them. Finally, we decrease the amount of training data to a quarter, split the test data into halves, and investigate semi-supervised training. Word error rates after semi-supervised training are significantly lower than those after baseline training. We see a 40% relative reduction in word error rate (1.5 → 0.9) over the test set.
['Tk Som', 'Dogan Can', 'Murat Saraclar']
HMM-based sliding video text recognition for Turkish broadcast news
329,183
Machine vision is a key technology used in an intelligent transportation system (ITS) to augment human drivers' visual capabilities. For in-car applications, additional motion components are usually induced by disturbances such as the bumpy ride of the vehicle or the steering effect, and they will affect the image interpretation process that is required by the motion field (motion vector) detection in the image. In this paper, a novel robust in-car digital image stabilization (DIS) technique is proposed to stably remove the unwanted shaking phenomena in the image sequences captured by in-car video cameras without the influence caused by moving objects (front vehicles) in the image or intentional motion of the car, etc. In the motion estimation process, the representative point matching (RPM) module combined with the inverse triangle method is used to determine and extract reliable motion vectors in plain images that lack features or contain a large low-contrast area, increasing robustness under different imaging conditions, since most of the images captured by in-car video cameras include large low-contrast sky areas. An adaptive background evaluation model is developed to deal with irregular images that contain large moving objects (front vehicles) or a low-contrast area above the skyline. In the motion compensation processing, a compensating motion vector (CMV) estimation method with an inner feedback-loop integrator is proposed to stably remove the unwanted shaking phenomena in the images without losing the effective area of the images under a constant motion condition. The proposed DIS technique was applied to on-road captured video sequences with various irregular conditions for performance demonstration.
['Sheng-Che Hsu', 'Sheng-Fu Liang', 'Kang-Wei Fan', 'Chin-Teng Lin']
A Robust In-Car Digital Image Stabilization Technique
692,133
Modeling the Clonal Evolution of Cancer from Next Generation Sequencing Data
['Wei Jiao', 'Shankar Vembu', 'Amit G. Deshwar', 'Lincoln Stein', 'Quaid Morris']
Modeling the Clonal Evolution of Cancer from Next Generation Sequencing Data
629,281
Modelling and simulation permeate all areas of business, science and engineering and increasingly complex simulation systems often require huge computing resources and data sets that are geographically distributed. The widely adopted platform for building distributed simulations is the High Level Architecture (HLA). Deficiencies associated with HLA have been well discussed in the literature. The advent of Grid technology enables the use of distributed computing resources and facilitates the access of geographically distributed data. In this paper, we propose a framework for executing large-scale distributed simulations using Grid services. The framework addresses some of the deficiencies of HLA, including dynamic discovery and resource utilization. End-users can construct large-scale distributed simulations using this framework with ease.
['Wenbo Zong', 'Yong Wang', 'Wentong Cai', 'Stephen John Turner']
Grid Services and Service Discovery for HLA-Based Distributed Simulation
491,838
Service Oriented Architecture: Overview and Directions.
['Boualem Benatallah', 'Hamid R. Motahari Nezhad']
Service Oriented Architecture: Overview and Directions.
775,640
We analyze the effect of subscriber-end timing-recovery circuit jitter on the performance of two types of adaptive echo cancellers that can be used for full-duplex digital transmission on two-wire subscriber loops. Under severe echo-to-far-end signal ratios, echo canceller performance is found to be quite sensitive to high-frequency jitter components. Satisfactory performance with respect to jitter requires that a narrow-band phase-locked loop, rather than a single-tuned high-Q filter, be employed for timing recovery.
['David D. Falconer']
Timing Jitter Effects on Digital Subscriber Loop Echo Cancellers: Part I--Analysis of the Effect
398,648
The organization of the CLEF 2007 evaluation campaign is described and details are provided concerning the tracks, test collections, evaluation infrastructure, and participation. The main results are commented on, and future developments in the organization of CLEF are discussed.
['Carol Peters']
What Happened in CLEF 2007
816,616
A new integrated SAR signal processor, geo-coding and interferometric processing commercial software (GeoWatch software) has been developed. It supports high-speed processing of large data volumes based on ERS-1&2, ENVISAT, ALOS PALSAR, JERS-1, RADARSAT-1&2, TerraSAT and CSK SAR data, with a user-friendly GUI and parallel processing on the latest 64-bit Windows and UNIX/Linux systems, on both personal computers and powerful multi-core CPU+GPU cluster platforms. It has been applied to DSM estimation in a mountain area and subsidence estimation at an airport using its application-oriented step-by-step pipeline processing, and the results showed the following key features: 1. Deformation monitoring coverage expansion from coherent points to distributed scatterers; 2. Deformation monitoring period reduction and accuracy improvement by multi-track SAR interferometric processing; 3. Truly ortho-rectified interferogram, image, digital surface model (DSM) and deformation map production.
['Aiguo Zhao', 'Hui Lin', 'Huadong Guo', 'Jinsong Chen', 'Liming Jiang']
Subsidence and DSM estimation using GeoWatch software
209,259
We present a multi-user multiple-input multiple-output (MU-MIMO) precoding scheme utilizing generalized singular value decomposition (GSVD). Our work is motivated by the precoding scheme developed by Sadek, Tarighat, and Sayed that maximizes the signal to leakage and noise ratio (SLNR), in which the precoding weight is obtained by the generalized eigenvalue decomposition (GEVD). However, the covariance matrix utilized in GEVD becomes close to singular as the signal-to-noise ratio becomes high. To improve the numerical accuracy, a GSVD-based algorithm is exploited in this paper, and a novel method is derived to compute the precoding weights for multiple users by removing redundant computational loads rather than repeating the GSVD algorithm for each user. In addition, we propose a technique to speed up the GSVD-based precoding algorithm by preprocessing using the QR decomposition. Finally, to improve the system performance, an antenna selection scheme using GSVD is also proposed that does not require an exhaustive search to choose the active antennas.
['Jaehyun Park', 'Joohwan Chun', 'Haesun Park']
Efficient GSVD Based Multi-User MIMO Linear Precoding and Antenna Selection Scheme
404,856
We define a class of “algebraic” random matrices. These are random matrices for which the Stieltjes transform of the limiting eigenvalue distribution function is algebraic, i.e., it satisfies a (bivariate) polynomial equation. The Wigner and Wishart matrices whose limiting eigenvalue distributions are given by the semicircle law and the Marcenko–Pastur law are special cases. Algebraicity of a random matrix sequence is shown to act as a certificate of the computability of the limiting eigenvalue density function. The limiting moments of algebraic random matrix sequences, when they exist, are shown to satisfy a finite depth linear recursion so that they may often be efficiently enumerated in closed form. In this article, we develop the mathematics of the polynomial method which allows us to describe the class of algebraic matrices by its generators and map the constructive approach we employ when proving algebraicity into a software implementation that is available for download in the form of the RMTool random matrix “calculator” package. Our characterization of the closure of algebraic probability distributions under free additive and multiplicative convolution operations allows us to simultaneously establish a framework for computational (noncommutative) “free probability” theory. We hope that the tools developed allow researchers to finally harness the power of infinite random matrix theory.
['N. Raj Rao', 'Alan Edelman']
The Polynomial Method for Random Matrices
118,159
Due to the increasing algorithmic complexity of today's embedded systems, the consideration of extra-functional properties becomes even more important. Extra-functional properties such as timing, power consumption, and temperature need to be validated against given requirements on all abstraction levels. For timing and power consumption at RT- and gate-level, several techniques are available, but there is still a lack of methods and tools for power estimation and analysis at the electronic system level (ESL) and above. Existing ESL methods in most cases use state-based methods for power simulation. This may lead to inaccurate results, especially for data-dependent designs. In this paper, we extend the Power State Machine (PSM) model for black-box RTL IP components with a mechanism that employs data-dependent switching activity using the Hamming distance (HD). In pipelined designs, we do not only consider the input HD but also the HDs of the internal pipeline stage registers. Since these registers of black-box IP are not observable from the outside, our model derives the internal HDs from previous input data. The results show that our extension achieves up to 38% better results than the previous PSM approach and up to 35% better results compared to a model considering only the input HD.
['Daniel Lorenz', 'Kim Gruettner', 'Wolfgang Nebel']
Data-and State-Dependent Power Characterisation and Simulation of Black-Box RTL IP Components at System Level
464,746
Improving Model Counting by Leveraging Definability.
['Jean-Marie Lagniez', 'Emmanuel Lonca', 'Pierre Marquis']
Improving Model Counting by Leveraging Definability.
988,782
Computing with Words (CWW) aims to investigate the possibility of imitating the unique ability of humans for approximate reasoning on the basis of approximately defined classes and concepts in the form of words. Type-2 fuzzy sets have been used to provide an adequate modeling basis for words in a fuzzy logic context. In the context of type-2 fuzzy sets employed as part of CWW, a variety of research efforts have been made to investigate approaches to model the meaning of specific words using type-2 fuzzy sets. In this paper we start by focusing on the interpretation of classical set-theoretical operations (complement, union and intersection) for crisp and type-1 fuzzy sets. We proceed by extending the interpretations to the results of the union and intersection operations of interval type-2 fuzzy sets, specifically indicating their effect on the uncertainty representation in the sets. We note the impact of the choice of t-norms and t-conorms in particular in the context of CWW applications where the interpretation of the resulting sets and its resemblance to the human intuitive meaning of the concept or word is essential. Finally, we provide the interpretation and reasoning behind the Multi Level Agreement (MLA) operation based on zSlices which was previously introduced and discuss the requirement for the selection of the right operations for the amalgamation of individual fuzzy sets and the potential for investigating this choice in particular in a CWW context.
['Christian Wagner', 'Hani Hagras']
Interpreting fuzzy set operations and Multi Level Agreement in a Computing with Words context
330,196
Complex moment-based eigensolvers for solving interior eigenvalue problems have been studied because of their high parallel efficiency. Recently, we proposed the block Arnoldi-type complex moment-based eigensolver without a low-rank approximation. A low-rank approximation plays a very important role in reducing computational cost and stabilizing accuracy in complex moment-based eigensolvers. In this paper, we develop the method and propose block Krylov-type complex moment-based eigensolvers with a low-rank approximation. Numerical experiments indicate that the proposed methods have higher performance than the block SS–RR method, which is one of the most typical complex moment-based eigensolvers.
['Akira Imakura', 'Tetsuya Sakurai']
Block Krylov-type complex moment-based eigensolvers for solving generalized eigenvalue problems
947,666
The capacitive coupled transmission line pulsing (CC-TLP) method is applied to a product IC at package level and, for the first time, at wafer level. The investigated product showed a field failure which could be reproduced by the CDM. The application of CC-TLP to the product at package and wafer level also reproduced the field failure. Furthermore, the measured failure currents correlate very well with the failure currents under CDM conditions.
['Heinrich Wolf', 'H. Gieser', 'Dirk Walter']
Investigating the CDM susceptibility of IC's at package and wafer level by capacitive coupled TLP.
420,975
This study develops a novel vehicle stability control (VSC) scheme using an adaptive neural network sliding mode control technique for Steer-by-Wire (SbW) equipped vehicles. The VSC scheme is designed in two stages, i.e., the upper and lower level control stages. An adaptive sliding mode yaw rate controller is first proposed as the upper stage to design the compensated steering angle, enabling the actual yaw rate to closely follow the desired one. Then, in the implementation of the yaw control system, the developed steering controller consists of a nominal control and a terminal sliding mode compensator, where a radial basis function neural network (RBFNN) is adopted to adaptively learn the uncertainty bound in the Lyapunov sense such that the actual front wheel steering angle can be driven to track the commanded angle in finite time. The proposed stability control scheme offers the following advantages over existing ones: (i) no prior parameter information on the vehicle and tyre dynamics is required for stability control, which greatly reduces the complexity of the stability control structure; (ii) robust stability control performance against parameter variations and road disturbances is obtained by ensuring good tracking performance of yaw rate and steering angle and strong robustness with respect to large and nonlinear system uncertainties. Simulation results are presented to verify the superior control performance of the proposed VSC scheme.
['Hai Wang', 'Ping He', 'Ming Yu', 'Linfeng Liu', 'Manh Tuan Do', 'Huifang Kong', 'Zhihong Man']
Adaptive neural network sliding mode control for steer-by-wire-based vehicle stability control
728,229
A conventional automatic speech recognizer does not perform well in the presence of noise, while human listeners are able to segregate and recognize speech in noisy conditions. We study a novel feature based on an auditory periphery model for robust speech recognition. Specifically, gammatone frequency cepstral coefficients are derived by applying a cepstral analysis on gammatone filterbank responses. Our evaluations show that the proposed feature performs considerably better than conventional acoustic features. We further demonstrate that integrating the proposed feature with a computational auditory scene analysis system yields promising recognition performance.
['Yang Shao', 'Zhaozhang Jin', 'DeLiang Wang', 'Soundararajan Srinivasan']
An auditory-based feature for robust speech recognition
432,456
Urban areas develop on formal and informal levels. Informal development is often highly dynamic, leading to a lag of spatial information about urban structure types. In this work, an object-based remote sensing approach will be presented to map the migrant housing urban structure type in the Pearl River Delta, China. SPOT5 data were utilized for the classification (auxiliary data, particularly up-to-date cadastral data, were not available). A hierarchically structured classification process was used to create (spectral) independence from single satellite scenes and to arrive at a transferrable classification process. Using the presented classification approach, an overall classification accuracy of migrant housing of 68.0% is attained.
["Sebastian d'Oleire-Oltmanns", 'Bodo Coenradie', 'Birgit Kleinschmit']
An Object-Based Classification Approach for Mapping Migrant Housing in the Mega-Urban Area of the Pearl River Delta (China)
187,558
We develop and evaluate a semiparametric method to estimate the mean-value function of a nonhomogeneous Poisson process (NHPP) using one or more process realizations observed over a fixed time interval. To approximate the mean-value function, the method exploits a specially formulated polynomial that is constrained in least-squares estimation to be nondecreasing so the corresponding rate function is nonnegative and smooth (continuously differentiable). An experimental performance evaluation for two typical test problems demonstrates the method's ability to yield an accurate fit to an NHPP based on a single process realization. A third test problem shows how the method can estimate an NHPP based on multiple realizations of the process.
['Michael E. Kuhl', 'Shalaka C. Deo', 'James R. Wilson']
Smooth flexible models of nonhomogeneous poisson processes using one or more process realizations
343,468
This paper presents a novel unified hierarchical structure for scalable edit propagation. Our method is based on the key observation that in edit propagation, appearance varies very smoothly in those regions where the appearance is different from the user-specified pixels. Uniform sampling in these regions leads to redundant computation. We propose to use a quadtree-based adaptive subdivision method such that more samples are selected in similar regions and fewer in those that are different from the user-specified regions. As a result, both the computation and the memory requirement are significantly reduced. In edit propagation, an edge-preserving propagation function is first built, and the full solution for all the pixels can be computed by interpolating from the solution obtained over the adaptively subdivided domain. Furthermore, our approach can be easily extended to accelerate video edit propagation using an adaptive octree structure. In order to improve user interaction, we introduce several new Gaussian Mixture Model (GMM) brushes to find pixels that are similar to the user-specified regions. Compared with previous methods, our approach requires significantly less time and memory, while achieving visually identical results. Experimental results demonstrate the efficiency and effectiveness of our approach on high-resolution photographs and videos.
['Chunxia Xiao', 'Yongwei Nie', 'Feng Tang']
Efficient Edit Propagation Using Hierarchical Data Structure
446,986
A graph is called almost self-complementary if it is isomorphic to the graph obtained from its complement by removing a 1-factor. In this paper, a complete classification is given of edge-transitive almost self-complementary graphs. This is then used to answer some open questions reported in the literature.
['Jin-Xin Zhou']
Edge-transitive almost self-complementary graphs
967,873
Self-adjusting computation enables writing programs that can automatically and efficiently respond to changes to their data (e.g., inputs). The idea behind the approach is to store all data that can change over time in modifiable references and to let computations construct traces that can drive change propagation. After changes have occurred, change propagation updates the result of the computation by re-evaluating only those expressions that depend on the changed data. Previous approaches to self-adjusting computation require that modifiable references be written at most once during execution---this makes the model applicable only in a purely functional setting. In this paper, we present techniques for imperative self-adjusting computation where modifiable references can be written multiple times. We define a language SAIL (Self-Adjusting Imperative Language) and prove consistency, i.e., that change propagation and from-scratch execution are observationally equivalent. Since SAIL programs are imperative, they can create cyclic data structures. To prove equivalence in the presence of cycles in the store, we formulate and use an untyped, step-indexed logical relation, where step indices are used to ensure well-foundedness. We show that SAIL accepts an asymptotically efficient implementation by presenting algorithms and data structures for its implementation. When the number of operations (reads and writes) per modifiable is bounded by a constant, we show that change propagation becomes as efficient as in the non-imperative case. The general case incurs a slowdown that is logarithmic in the maximum number of such operations. We describe a prototype implementation of SAIL as a Standard ML library.
['Umut A. Acar', 'Amal Ahmed', 'Matthias Blume']
Imperative self-adjusting computation
459,298
Data structures that represent static unlabeled trees and planar graphs are developed. The structures are more space efficient than conventional pointer-based representations, but (to within a constant factor) they are just as time efficient for traversal operations. For trees, the data structures described are asymptotically optimal: there is no other structure that encodes n-node trees with fewer bits per node, as n grows without bound. For planar graphs (and for all graphs of bounded page number), the data structure described uses linear space: it is within a constant factor of the most succinct representation.
['Guy Jacobson']
Space-efficient static trees and graphs
314,070
Our system consists of a concept-based retrieval subsystem which performs the baseline blog distillation, an opinion identification subsystem, and an opinion-in-depth analysis subsystem which performs the faceted blog distillation task. In the baseline task, documents deemed relevant to the query are retrieved by the retrieval system, without taking into consideration any facet requirements. The feeds are ranked in descending order of the sum of the relevance scores of retrieved documents. In order to improve the recall of the retrieval subsystem, we recognize proper nouns or dictionary phrases without requiring that all the words of the phrases be matched. In the opinionated vs. factual and personal vs. official faceted tasks, the opinion identification subsystem is employed to recognize query-relevant opinions within the documents. Personal documents are more likely to be opinionated than official documents. In the in-depth vs. shallow faceted task, the depth of the opinion within a document is measured by the number of query-related concepts the document contains.
['Lifeng Jia', 'Clement T. Yu']
UIC at TREC 2010 Faceted Blog Distillation
178,848
This paper presents a method and algorithms for automatic modeling of anatomical joint motion. The method relies on collision detection to achieve stable positions and orientations of the knee joint by evaluating the relative motion of the tibia with respect to the femur (for example, flexion-extension). The stable positions then become the basis for a look-up table employed in the animation of the joint. The strength of this method lies in its robustness to animate any normal anatomical joint. It is also expandable to other anatomical joints given a set of kinematic constraints for the joint type as well as a high-resolution, static, 3-D model of the joint. The demonstration could be patient-specific if a person's real anatomical data could be obtained from a medical imaging modality such as computed tomography or magnetic resonance imaging. Otherwise, the demonstration requires the scaling of a generic joint based on patient characteristics. Compared with current teaching strategies, this Virtual Reality Dynamic Anatomy (VRDA) tool aims to greatly enhance students' understanding of 3-D human anatomy and joint motions. A preliminary demonstration of the optical superimposition of a generic knee joint on a leg model is shown.
['Yohan Baillot', 'Jannick P. Rolland', 'Kuo-Chi Lin', 'Donna Wright']
Automatic Modeling of Knee-Joint Motion For The Virtual Reality Dynamic Anatomy (VRDA) Tool
102,425
Functional and Ecosystem Requirements to Design Sustainable Product-Service.
['Margherita Peruzzini', 'Eugenia Marilungo', 'Michele Germani']
Functional and Ecosystem Requirements to Design Sustainable Product-Service.
775,411
We propose fast algorithms for sparse linear prediction, using O(N log N) algorithms for the repeated solution of symmetric Toeplitz systems. The algorithms can handle even quite large dimensions and high sampling rates, showing potential for implementation in real-time systems, and experiments show that high- and low-accuracy solutions perform almost equally well. In speech processing applications, imposing sparsity constraints on high-order linear prediction coefficients and prediction residuals has proven successful in overcoming some of the limitations of conventional linear predictive modeling. However, this modeling scheme, named sparse linear prediction, is generally formulated as a linear programming problem that comes at the expense of a much higher computational burden compared to the conventional approach. In this paper, we propose to solve the optimization problem by combining splitting methods with two approaches: the Douglas-Rachford method and the alternating direction method of multipliers. These methods allow us to obtain solutions with a higher computational efficiency, orders of magnitude faster than with general-purpose software based on interior-point methods. Furthermore, computational savings are achieved by solving the sparse linear prediction problem with lower accuracy than in previous work. In the experimental analysis, we clearly show that a solution with lower accuracy can achieve approximately the same performance as a high-accuracy solution both objectively, in terms of prediction gain, and with perceptually relevant measures, when evaluated in a speech reconstruction application.
['Tobias Lindstrøm Jensen', 'Daniele Giacobello', 'Toon van Waterschoot', 'Mads Græsbøll Christensen']
Fast algorithms for high-order sparse linear prediction with applications to speech processing
291,674
The increasing complexity of systems-on-a-chip with the accompanied increase in their test data size has made the need for test data reduction imperative. In this paper we introduce a novel and very efficient lossless compression technique for testing systems-on-a-chip based on geometric shapes. The technique exploits reordering of test vectors to minimize the number of shapes needed to encode the test data. The effectiveness of the technique in achieving high compression ratio is demonstrated on the largest ISCAS85 and full-scanned versions of ISCAS89 benchmark circuits. In this paper, it is assumed that an embedded core will be used to execute the decompression algorithm and decompress the test data.
['Aiman H. El-Maleh', 'S. al Zahir', 'Esam Khan']
A geometric-primitives-based compression scheme for testing systems-on-a-chip
156,935