Dataset columns: abstract (string, 0–11.1k chars) · authors (string, 9–1.96k chars) · title (string, 4–353 chars) · __index_level_0__ (int64, 3–1,000k)
['Peter Potash', 'William Boag', 'Alexey Romanov', 'Vasili Ramanishka', 'Anna Rumshisky']
SimiHawk at SemEval-2016 Task 1: A Deep Ensemble System for Semantic Textual Similarity.
835,142
There are various methods for realizing augmented reality. Projection-based augmented reality presents augmented information directly in the real world, so every person in the environment can share the experience. However, the method requires extensive preparation and considerable energy to project images freely, and it works only under limited conditions, such as indoors or outdoors at night. Our research addresses these problems. We propose a method that uses projection-based augmented reality in head-worn equipment. It recognizes an object from camera images and projects a mark carrying a small amount of information onto the recognized object. When the equipment moves, an error arises in the position where the mark is projected. We decrease this error using precise measurements of angular velocity. We describe the method, its implementation, and experiments evaluating its performance in vitro and in vivo.
['Kyota Aoki', 'Naoki Aoyagi']
Mark Projection on Real World with Precise Measurement of Angular Velocity for Helping Picking Works
847,452
Sufficient work has been done to demonstrate that software reliability models can be used to monitor reliability growth over a useful range of software development projects. However, due to the lack of appropriate tools, the application of software reliability models as a means for project management is not as widespread as it might be. The existing software reliability modeling and measurement programs are either difficult for a nonspecialist to use, or short of a systematic and comprehensive capability in the software reliability measurement practice. To address the ease-of-use and the capability issues, the authors have prototyped a software reliability modeling tool called CASRE, a Computer-Aided Software Reliability Estimation tool. Implemented with a systematic and comprehensive procedure as its framework, CASRE will encourage more widespread use of software reliability modeling and measurement as a routine practice for software project management. The authors explore the CASRE tool to demonstrate its functionality and capability.
['Michael R. Lyu', 'Allen P. Nikora', 'William H. Farr']
A systematic and comprehensive tool for software reliability modeling and measurement
139,501
Dense wavelength division multiplexing optical networks use tunable devices such as distributed Bragg reflector laser diodes. These optical sources require a precise wavelength calibration according to the ITU grid, even with aged components. Some specific optical spectrum analyzers are commercially available. Unfortunately, measurements using those systems are generally relatively slow. We present and discuss in this paper a fast spectral measurement system that can easily be implemented in a laser diode package.
['Juan Manuel Campos', 'Alain Destrez', 'Joel Jacquet', 'Zeno Toffano']
Ultra-fast optical spectrum analyzer for DWDM applications
431,502
['Paulo Veríssimo', 'Nuno Ferreira Neves', 'Miguel Correia', 'Yves Deswarte', 'Anas Abou El Kalam', 'Andrea Bondavalli', 'Alessandro Daidone']
The CRUTIAL Architecture for Critical Information Infrastructures.
752,174
In this paper, we study spectral properties of the so-called monotone systems and link these results with the celebrated Perron-Frobenius theorem for linear positive systems. Using these spectral properties we study the geometry of basins of attraction of monotone systems. Additionally, we show that under certain conditions we can bound the variations in these basins under parametric uncertainty in the vector field. We also provide a computational algorithm to estimate the basins of attraction and illustrate the results on two and three state monotone systems.
['Aivar Sootla', 'Alexandre Mauroy']
Estimation of Isostables and Basins of Attraction of Monotone Systems
599,527
A fast channel prediction scheme is proposed for Ricean fading scenarios. By employing an echo state network (ESN), the scheme is able to obtain smaller prediction error than previous designs. Simulation results show that the ESN prediction method has lower normalized mean squared error than the traditional autoregressive, discrete wavelet transform, and support vector machine prediction approaches. The symbol error rate gap between the perfect and predicted channel state information is small.
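A minimal sketch of the prediction scheme described above, assuming a synthetic fading envelope in place of real channel measurements: a fixed random reservoir is driven by the observed samples and a ridge-regression readout is trained for one-step-ahead prediction. All parameter choices (n_res, rho, ridge) are illustrative, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Ricean-like envelope (assumption: stand-in for measured CSI)
t = np.arange(2000)
scatter = 0.3 * (np.sin(0.07 * t) + 0.5 * rng.standard_normal(t.size))
series = np.abs(1.0 + scatter)              # LOS component plus scatter

# Echo state network: fixed random reservoir, trained linear readout
n_res, rho, ridge = 200, 0.9, 1e-6
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))   # rescale to spectral radius rho

def run_reservoir(u):
    x, states = np.zeros(n_res), []
    for v in u:
        x = np.tanh(W_in[:, 0] * v + W @ x)
        states.append(x.copy())
    return np.array(states)

train, test = series[:1500], series[1500:]
X = run_reservoir(train[:-1])               # reservoir states at time t
Y = train[1:]                               # target: sample at t + 1
# Ridge-regression readout
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

pred = run_reservoir(series[1499:-1]) @ W_out
nmse = np.mean((pred - test) ** 2) / np.var(test)
print(f"one-step NMSE on held-out samples: {nmse:.4f}")
```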
['Yisheng Zhao', 'Hui Gao', 'Norman C. Beaulieu', 'Zhonghui Chen', 'Hong Ji']
Echo State Network for Fast Channel Prediction in Ricean Fading Scenarios
945,195
The increased computational needs in many sectors place huge demands on cloud computing. Power consumption and resource pool capacity are two of the challenges faced by the next generation of high performance computing (HPC). This paper aims to minimise computing energy consumption in decentralised multi-cloud systems by using Dynamic Voltage and Frequency Scaling (DVFS) when scheduling dependent HPC tasks under deadline constraints. We propose an energy-aware scheduling algorithm, EAGS. To demonstrate its efficiency, we compare EAGS with the Cloud min-min Scheduling (CMMS) algorithm in different experiments. The simulation results show that our algorithm reduces energy consumption by an average of 63.9% compared with CMMS.
['Aeshah Alsughayyir', 'Thomas Erlebach']
Energy Aware Scheduling of HPC Tasks in Decentralised Cloud Systems
695,099
Based on the minimal reduction strategy, Yang et al. (2011) developed a fixed-sum output data envelopment analysis (FSODEA) approach to evaluate the performance of decision-making units (DMUs) with fixed-sum outputs. However, in terms of such a strategy, all DMUs compete over fixed-sum outputs with “no memory” that will result in differing efficient frontiers’ evaluations. To address the problem, in this study, we propose an equilibrium efficiency frontier data envelopment analysis (EEFDEA) approach, by which all DMUs with fixed-sum outputs can be evaluated based on a common platform (or equilibrium efficient frontier). The proposed approach can be divided into two stages. Stage 1 constructs a common evaluation platform via two strategies: an extended minimal adjustment strategy and an equilibrium competition strategy. The former ensures that original efficient DMUs are still efficient, guaranteeing the existence of a common evaluation platform. The latter makes all DMUs achieve a common equilibrium efficient frontier. Then, based on the common equilibrium efficient frontier, Stage 2 evaluates all DMUs with their original inputs and outputs. Finally, we illustrate the proposed approach by using two numerical examples.
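For context, the sketch below solves the classical input-oriented CCR envelopment model that approaches such as FSODEA and the proposed EEFDEA build on; the fixed-sum output constraint itself is not modeled here, and the data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (m inputs, n DMUs), Y: (s outputs, n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # minimise theta; vars = [theta, lambdas]
    A_in = np.hstack([-X[:, [o]], X])        # sum_j lam_j x_j <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # sum_j lam_j y_j >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                           # theta* in (0, 1]

# toy data: 2 inputs, 1 output, 4 DMUs (hypothetical numbers)
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])
Y = np.ones((1, 4))
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```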
['Min Yang', 'Yongjun Li', 'Y. Chen', 'Liang Liang']
An equilibrium efficiency frontier data envelopment analysis approach for evaluating decision-making units with fixed-sum outputs
3,913
There is a great variety of theoretical models of emotions and implementation technologies that can be used in the design of affective computers. Consequently, designers and researchers usually make practical choices of models and develop ad hoc solutions that sometimes lack flexibility. In this paper we introduce a generic approach to modeling emotional cues. The main component of our approach is an ontology of emotional cues. The concepts in the ontology are grouped into three global modules representing three layers of emotion detection or production: the emotion module, the emotional cue module, and the media module. The emotion module defines emotions as represented by emotional cues. The emotional cue module describes external emotional representations in terms of media properties. The media module describes the basic media properties important for emotional cues. The proposed ontology enables flexible description of emotional cues at different levels of abstraction. This approach can serve as a guide for the flexible design of affective devices, independently of the starting model and the final implementation.
['Zeljko Obrenovic', 'Nestor Garay', 'Juan Miguel López', 'Inmaculada Fajardo', 'Idoia Cearreta']
An ontology for description of emotional cues
376,364
This paper documents the requirements on tracking technology for spatially interactive sonic arts. We do this by comparing our theorised notion of an ideal kinaesthetic interface to, firstly, the current results of an ongoing online survey and, secondly, the results of our ongoing Workshop on Music, Space & Interaction (MS&I). In MS&I we research the affordances of existing and hypothetical technology to enhance and facilitate spatial interactivity. We give both qualitative and quantitative recommendations for design. While underlining the specific requirements for sonic art in respect to its aural nature, we discuss how and why the requirements elicited from our research can be applied to spatial interactivity in general.
['Dominik Schlienger']
Requirements on Kinaesthetic Interfaces for Spatially Interactive Sonic Art
933,166
['Qiang Zhang', 'Nan Wang', 'Dongsheng Zhou', 'Xiaopeng Wei']
A Triangulation Method for Unorganized Points Cloud Based on Ball Expanding
763,526
An experimental technique used the chemical reaction between acetic acid (CH3COOH) and cyclohexylamine (C6H13N) to generate smoke for visualising the wake flow from a moving object. A 1/5th-scale manikin was dabbed with cyclohexylamine at specific locations and entered an acetic-acid-saturated chamber. Smoke was generated via the chemical reaction as the manikin moved through the chamber. High-speed photography and image processing techniques were used to determine whether qualitative and quantitative data could be produced for (1) better understanding the effects of trailing wakes on particle exposure induced by human movement and (2) providing validation data for computational fluid dynamics (CFD) modelling. Image analysis showed three phases of manikin movement: peak velocity, deceleration, and stationary. Detailed flow separation images showed that regular vortices were produced at the left shoulder, while flow separating at the hand swirled behind and inwards. Analysis of flow over the head revealed how the separation point shifted from the back of the head to the front as the velocity decreased. The results demonstrated that the experimental method was feasible in producing meaningful results for wake flow phenomena behind a manikin and validation data for CFD simulations.
['Kiao Inthavong', 'Yao Tao', 'Phred Petersen', 'Krishna Mohanarangam', 'William Yang', 'Jiyuan Tu']
A smoke visualisation technique for wake flow from a moving human manikin
880,714
['Richard Müller', 'P. Kovacs', 'Jan Schilbach', 'Ulrich W. Eisenecker', 'Dirk Zeckzer', 'Gerik Scheuermann']
A structured approach for conducting a series of controlled experiments in software visualization
941,302
Distributed computations on graphs are becoming increasingly important with the emergence of large graphs such as social networks and the Web that contain huge amounts of useful information. Computations can be easily distributed with the use of specialised frameworks like Hadoop with MapReduce, Giraph/Pregel or GraphLab. Yet, declarative, query-like, but at the same time efficient solutions are lacking. Programmers are needed to code all computations by hand and manually optimise each individual program. This paper presents an implementation of a tool which extends a distributed computations platform, Apache Spark, with the capability of executing queries written in a variant of a declarative query language, Datalog, especially extended to better support graph algorithms. This approach makes it possible to express graph algorithms in a declarative query language, accessible to a broader group of users than typical programming languages, and execute them on an existing infrastructure for distributed computations.
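As a plain-Python stand-in for the recursive evaluation such a system performs (the paper targets Spark RDDs, which are not shown here), this sketch runs semi-naive Datalog evaluation of transitive closure: path(X,Y) :- edge(X,Y). path(X,Z) :- path(X,Y), edge(Y,Z).

```python
def transitive_closure(edges):
    """Semi-naive evaluation: only newly derived facts (delta) are joined
    with the base relation on each iteration."""
    total = set(edges)
    delta = set(edges)
    while delta:
        delta = {(x, z) for (x, y) in delta
                        for (y2, z) in edges if y2 == y} - total
        total |= delta
    return total

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```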
['Marek Rogala', 'Jan Hidders', 'Jacek Sroka']
DatalogRA: datalog with recursive aggregation in the spark RDD model
852,876
An important problem when moving an application to the cloud consists in selecting the most suitable cloud plan (among those available from cloud providers) for the application deployment, with the goal of finding the best match between application requirements and plan characteristics. If a user wishes to move multiple applications at the same time, this task can be complicated by the fact that different applications might have different (and possibly contrasting) requirements. In this paper, we propose an approach enabling users to select a cloud plan that best balances the satisfaction of the requirements of multiple applications. Our solution operates by first ranking the available plans for each application (matching plan characteristics and application requirements) and then by selecting, through a consensus-based process, the one that is considered more acceptable by all applications.
['Ala Arman', 'Sara Foresti', 'Giovanni Livraga', 'Pierangela Samarati']
A consensus-based approach for selecting cloud plans
942,102
While search engines sometimes return different documents containing contradictory answers, little is known about how users handle inconsistent information. This paper investigates the effect of search expertise (defined as specialized knowledge on the internal workings of search engines) on search behavior and satisfaction criteria of users. We selected four tasks comprising factoid questions with inconsistent answers, extracted answers that 30 study participants had found in these tasks, and analyzed their answer-finding behavior in terms of the presence or absence of search expertise. Our main findings are as follows: (1) finding inconsistent answers causes users with search expertise (search experts) to feel dissatisfied, while effort in searching for answers is the dominant factor in task satisfaction for those without search expertise (search non-experts); (2) search experts tend to spend longer completing tasks than search non-experts even after finding possible answers; and (3) search experts narrow down the scope of searches to promising answers as time passes as opposed to search non-experts, who search for any answers even in the closing stage of task sessions. These findings suggest that search non-experts tend to be less concerned about the consistency in their found answers, on the basis of which we discuss the design implications for making search non-experts aware of the existence of inconsistent answers and helping them to search for supporting evidence for answers.
['Kazutoshi Umemoto', 'Takehiro Yamamoto', 'Katsumi Tanaka']
How do users handle inconsistent information?: the effect of search expertise
843,945
A new gradient-based eigenstructure algorithm to locate and track the azimuth and elevation angles of unknown sources in a 3-dimensional environment is proposed and investigated. Starting from initial estimates of the source locations from, say, a coarse search of the multiple signal classification (MUSIC) spectrum, the algorithm obtains better estimates and tracks sources in a repetitive cyclical manner. To refine a particular source location estimate: (1) a preprocessor is designed to remove the effects of the other sources; (2) the eigenvector for the largest eigenvalue of the covariance matrix after preprocessing is found and its difference from the ideal value is determined; and finally (3) a gradient calculation is used to obtain an estimate for the difference of the assumed source location from the actual position. The advantages of using the proposed technique rather than performing a thorough MUSIC search in the 2-dimensional spectrum are: (1) the implementation complexity can be reduced considerably, and (2) the algorithm is well suited to be employed for continuous adaptive filtering purposes in a tracking scenario.
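The gradient refinement loop is specific to the paper, but the coarse MUSIC search it starts from is standard. Below is a minimal 1-D (azimuth-only) version for a uniform linear array; the array geometry, spacing, and simulated source are assumptions for illustration.

```python
import numpy as np

def music_spectrum(X, n_src, angles_deg, d=0.5):
    """1-D MUSIC pseudospectrum for a uniform linear array.
    X: (n_sensors, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    _, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, : n - n_src]                   # noise subspace
    p = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n) * np.sin(th))  # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)

# simulated check: one source at 25 degrees, light noise
rng = np.random.default_rng(1)
n, snaps = 8, 200
a = np.exp(-2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(25.0)))
X = np.outer(a, rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps))
X += 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
grid = np.linspace(-90, 90, 361)
print(grid[np.argmax(music_spectrum(X, 1, grid))])   # ~25.0
```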
['Chi Chung Ko']
Source tracking with a gradient-based eigenstructure algorithm
142,360
Dealing with spam is very costly, and many organizations have tried to reduce spam-related costs by installing spam filters. Relying on modern econometric methods to reduce the selection bias of installing a spam filter, we use a unique data setting implemented at a German university to measure the costs associated with spam and the cost savings of spam filters. Our methodological framework accounts for effect heterogeneity and can easily be used to estimate the effect of other IS technologies implemented in organizations. The majority of costs stem from the time that employees spend identifying and deleting spam, amounting to an average of approximately five minutes per employee per day. Our analysis, which accounts for selection bias, finds that the installation of a spam filter reduces these costs by roughly one third. Failing to account for the selection bias would suggest that installing a spam filter does not reduce working time losses. However, cost savings only occur when the spam burden is high, indicating that spam filters do not necessarily reduce costs and are therefore no universal remedy. The analysis further shows that spam filters alone are a countermeasure against spam of only limited effectiveness, because they reduce costs by just one third.
['Marco Caliendo', 'Michel Clement', 'Dominik Papies', 'Sabine Scheel-Kopeinig']
Research Note---The Cost Impact of Spam Filters: Measuring the Effect of Information System Technologies in Organizations
525,556
Wire electrical discharge machining (WEDM) is a non-traditional machining process used for machining difficult-to-machine materials, such as composites and intermetallic materials. In the present paper, an attempt is made to machine hypereutectic Al-Si alloys using WEDM, as these materials are widely used in the automotive, aerospace and electronics fields because of their attractive properties. In the study, the WEDM machining parameters, such as pulse on time, pulse off time, wire feed rate and variation of the percentage of silicon, are taken as controlling factors for experimentation. In the present manuscript, an attempt is made to study the influence of the percentage of silicon in the alloy system on performance measures such as the material removal rate (MRR) and surface roughness (SR) of machining. Further, the influence of various input process parameters on the responses has also been studied. In order to optimise the said performance characteristics, two multi-objective optimisation methodologies n...
['D. Mani Babu', 'S. Venkat Kiran', 'Pandu Ranga Vundavilli', 'Animesh Mandal']
Experimental investigations and multi-response optimisation of wire electric discharge machining of hypereutectic Al-Si alloys
898,492
Motivation: The importance of RNA sequence analysis has been increasing since the discovery of various types of non-coding RNAs transcribed in animal cells. Conventional RNA sequence analyses have mainly focused on structured regions, which are stabilized by the stacking energies acting on adjacent base pairs. On the other hand, recent findings regarding the mechanisms of small interfering RNAs (siRNAs) and transcription regulation by microRNAs (miRNAs) indicate the importance of analyzing accessible regions where no base pairs exist. So far, relatively few studies have investigated the nature of such regions. Results: We have conducted a detailed investigation of accessibilities around the target sites of siRNAs and miRNAs. We have exhaustively calculated the correlations between the accessibilities around the target sites and the repression levels of the corresponding mRNAs. We have computed the accessibilities with an originally developed software package, called ‘Raccess’, which computes the accessibility of all segments of a fixed length for a given RNA sequence when the maximal distance between base pairs is limited to a fixed size W. We show that the computed accessibilities are relatively insensitive to the choice of the maximal span W. We have found that the efficacy of siRNAs depends strongly on the accessibility of the very 3′-end of their binding sites, which might reflect a target site recognition mechanism in the RNA-induced silencing complex. We also show that the efficacy of miRNAs has a similar dependence on the accessibilities, but some miRNAs also show positive correlations between the efficacy and the accessibilities in broad regions downstream of their putative binding sites, which might imply that the downstream regions of the target sites are bound by other proteins that allow the miRNAs to implement their functions. We have also investigated the off-target effects of an siRNA as a potential RNAi therapeutic. We show that the off-target effects of the siRNA have similar correlations to the miRNA repression, indicating that they are caused by the same mechanism. Availability: The C++ source code of the Raccess software is available at http://www.ncrna.org/software/Raccess/. The microarray data on the measurements of the siRNA off-target effects are also available at the same site. Contact: kiryu-h@k.u-tokyo.ac.jp. Supplementary information: Supplementary data are available at Bioinformatics online.
['Hisanori Kiryu', 'Goro Terai', 'Osamu Imamura', 'Hiroyuki Yoneyama', 'Kenji Suzuki', 'Kiyoshi Asai']
A detailed investigation of accessibilities around target sites of siRNAs and miRNAs
474,707
We present a methodology for the automatic identification and delineation of germ-layer components in H&E stained images of teratomas derived from human and nonhuman primate embryonic stem cells. A knowledge and understanding of the biology of these cells may lead to advances in tissue regeneration and repair, the treatment of genetic and developmental syndromes, and drug testing and discovery. As a teratoma is a chaotic organization of tissues derived from the three primary embryonic germ layers, H&E teratoma images often present multiple tissues, each having complex and unpredictable positions, shapes, and appearance with respect to each individual tissue as well as with respect to other tissues. While visual identification of these tissues is time-consuming, it is surprisingly accurate, indicating that there exist enough visual cues to accomplish the task. We propose automatic identification and delineation of these tissues by mimicking these visual cues. We use pixel-based classification, resulting in an encouraging range of classification accuracies from 74.9% to 93.2% for 2- to 5-tissue classification experiments at different scales.
['Ramamurthy Bhagavatula', 'Matthew Fickus', 'W. Kelly', 'Chenlei Guo', 'John A. Ozolek', 'Carlos A. Castro', 'Jelena Kovacevic']
Automatic identification and delineation of germ layer components in H&E stained images of teratomas derived from human and nonhuman primate embryonic stem cells
216,279
Peer-to-peer distributed storage systems provide reliable access to data through redundancy spread over nodes across the Internet. A key goal is to minimize the amount of bandwidth used to maintain that redundancy. Storing a file using an erasure code, in fragments spread across nodes, promises to require less redundancy and hence less maintenance bandwidth than simple replication to provide the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate a new fragment in a distributed way while transferring as little data as possible across the network. In this paper, we introduce a general technique to analyze storage architectures that combine any form of coding and replication, as well as presenting two new schemes for maintaining redundancy using erasure codes. First, we show how to optimally generate MDS fragments directly from existing fragments in the system. Second, we introduce a new scheme called regenerating codes which use slightly larger fragments than MDS but have lower overall bandwidth use. We also show through simulation that in realistic environments, regenerating codes can reduce maintenance bandwidth use by 25% or more compared with the best previous design - a hybrid of replication and erasure codes - while simplifying system architecture.
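The storage/repair-bandwidth trade-off behind this result can be illustrated numerically with the cut-set bounds from the regenerating-codes literature. A sketch under assumed parameters (the paper's reported 25% figure comes from simulation, not from these closed forms alone):

```python
def repair_bandwidth(M, k, d):
    """Per-repair download for a file of size M encoded into fragments,
    where any k fragments recover the file and d surviving nodes are
    contacted during repair. Parameters here are illustrative."""
    naive_mds = M                               # rebuild whole file, re-encode one fragment
    msr = M * d / (k * (d - k + 1))             # minimum-storage regenerating point
    mbr = 2 * M * d / (2 * k * d - k * k + k)   # minimum-bandwidth regenerating point
    return naive_mds, msr, mbr

naive, msr, mbr = repair_bandwidth(M=1.0, k=7, d=13)
print(f"naive MDS: {naive:.3f}  MSR: {msr:.3f}  MBR: {mbr:.3f}")
# MSR downloads ~0.27 of the file instead of all of it; MBR is lower still.
```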
['Alexandros G. Dimakis', 'Philip Brighten Godfrey', 'Martin J. Wainwright', 'Kannan Ramchandran']
Network Coding for Distributed Storage Systems
11,169
(I) Given a segment-cuttable polygon P drawn on a planar piece of material Q, we show how to cut P out of Q by a (short) segment saw with a total length of the cuts no more than 2.5 times the optimal. We revise the algorithm of Demaine et al. (2001) so as to achieve this ratio. (II) We prove that any collection R of n disjoint axis-parallel rectangles drawn on a planar piece of material Q is cuttable by at most 4n rays, and we present an algorithm that runs in O(n log n) time for computing a suitable cutting sequence. In particular, the same result holds for cutting with an arbitrary segment saw (of any length). (III) Given a collection P of segment-cuttable polygons drawn on a planar piece of material such that no two polygons in P touch each other, P is always cuttable by a sufficiently short segment saw. We also show that there exist collections of disjoint polygons that are uncuttable by a segment saw. (IV) Given a collection P of disjoint polygons drawn on a planar piece of material Q, we present a polynomial-time algorithm that computes a suitable cutting sequence to cut the polygons in P out of Q using ray cuts when P is ray-cuttable and otherwise reports P as uncuttable.
['Adrian Dumitrescu', 'Anirban Ghosh', 'Masud Hasan']
Cutting out polygon collections with a saw
820,828
With the increasing popularity of using wireless local area networks (WLANs) for Internet access, the controlled channel access mechanism in IEEE 802.11e WLANs, i.e., HCF controlled channel access (HCCA), has received much more attention since its inherent centralized mechanism is more efficient in handling time-bounded multimedia traffic. So far, only a few research studies address the admission control problem of variable bit rate (VBR) traffic over HCCA. These existing studies consider each traffic flow individually and, thus, cannot exploit the statistical multiplexing gain among multiple VBR traffic flows. In this paper, we apply the existing statistical multiplexing framework to the studied admission control problem, with all the features of IEEE 802.11e HCCA being taken into consideration. Experimental results show that our proposed admission control scheme achieves significant improvement in network utilization while still satisfying all the quality-of-service (QoS) requirements.
['Deyun Gao', 'Jianfei Cai', 'Chang Wen Chen']
Admission Control Based on Rate-Variance Envelop for VBR Traffic Over IEEE 802.11e HCCA WLANs
2,105
In this article, we suggest a new algorithm to determine the prior probabilities for classification problems with the Bayesian method. The prior probabilities are determined by combining the information of the populations in the training set and the new observations through the fuzzy clustering method (FCM), instead of using a uniform distribution, the sample ratio, or the Laplace method as existing approaches do. We then combine the determined prior probabilities and the estimated likelihood functions to classify the new object. In practice, calculations are performed by Matlab procedures. The proposed algorithm is tested on three numerical examples, including benchmark and real data sets. The results show that the new approach is reasonable and more efficient than existing ones.
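A small sketch of the general idea, assuming Gaussian likelihoods and pre-computed fuzzy cluster centers; the membership formula is the standard FCM one, and the exact way the paper combines training-population information is not reproduced here.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fcm_membership(x, centers, m=2.0):
    """Standard fuzzy c-means membership of point x w.r.t. cluster centers."""
    d = np.linalg.norm(np.asarray(centers) - x, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

def classify(x, centers, means, covs):
    prior = fcm_membership(x, centers)          # data-driven prior, not uniform
    like = np.array([multivariate_normal.pdf(x, mean=mu, cov=cv)
                     for mu, cv in zip(means, covs)])
    post = prior * like
    return int(np.argmax(post)), post / post.sum()

# two hypothetical classes in 2-D
centers = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
means, covs = centers, [np.eye(2), np.eye(2)]
label, post = classify(np.array([2.2, 2.5]), centers, means, covs)
print(label, np.round(post, 3))
```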
['Thao Nguyen-Trang', 'Tai Vovan']
A new approach for determining the prior probabilities in the classification problem by Bayesian method
757,142
['Omar Khrouf', 'Kaïs Khrouf', 'Jamel Feki']
OLAP on documents: modeling and implementation
716,649
Dictionary code compression is a technique which has been studied as a method to reduce the energy consumed in the instruction fetch path of processors. Instructions or instruction sequences in the code are replaced with short code words. These code words are later used to index a dictionary which contains the original uncompressed instruction or an entire sequence. In this paper, we present a new method which improves on code density compared to previously published dictionary methods. It uses a two-level dictionary design and is capable of handling compression of both individual instructions and code sequences of 2-16 instructions. The two dictionaries are in separate pipeline stages and work together to decompress sequences and instructions. The impact on storage size for the dictionaries is rather small as the sequences in the dictionary are stored as individually compressed instructions, instead of normal instructions. Compared to previous dictionary code compression methods we achieve improved dynamic compression rate, potential for better performance with reasonable static compression rate and with still small dictionary size suitable for context switching.
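A toy software model of the idea (not the authors' hardware pipeline): a level-1 dictionary holds frequent individual instructions, and a level-2 dictionary holds frequent sequences whose members are all level-1 entries, so a whole sequence costs one codeword. All sizes below are arbitrary.

```python
from collections import Counter

SEQ_LEN, N_L1, N_L2 = 4, 16, 8   # illustrative dictionary sizes

def build_dicts(trace):
    l1 = [ins for ins, _ in Counter(trace).most_common(N_L1)]
    seqs = (tuple(trace[i:i + SEQ_LEN]) for i in range(len(trace) - SEQ_LEN + 1))
    # level-2 stores sequences as lists of level-1 codewords, so every member
    # of a stored sequence must itself be in the level-1 dictionary
    eligible = Counter(s for s in seqs if all(i in l1 for i in s))
    l2 = [s for s, _ in eligible.most_common(N_L2)]
    return l1, l2

def compress(trace, l1, l2):
    out, i = [], 0
    while i < len(trace):
        window = tuple(trace[i:i + SEQ_LEN])
        if window in l2:                      # one short codeword for a sequence
            out.append(("L2", l2.index(window))); i += SEQ_LEN
        elif trace[i] in l1:                  # short codeword for one instruction
            out.append(("L1", l1.index(trace[i]))); i += 1
        else:                                 # escape: emit the raw instruction
            out.append(("RAW", trace[i])); i += 1
    return out

trace = ["ld", "add", "st", "br"] * 10 + ["mul", "ld", "add", "st", "br"]
l1, l2 = build_dicts(trace)
print(len(trace), "instructions ->", len(compress(trace, l1, l2)), "codewords")
```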
['Mikael Collin', 'Mats Brorsson']
Two-Level Dictionary Code Compression: A New Scheme to Improve Instruction Code Density of Embedded Applications
167,649
Numerically finding stabilising feedback control laws for linear systems of periodic differential equations is a nontrivial task with no known reliable solutions. The most successful method requires solving matrix differential Riccati equations with periodic coefficients. All previously proposed techniques for solving such equations involve numerical integration of unstable differential equations and consequently fail whenever the period is too large or the coefficients vary too much. Here, a new method for numerical computation of stabilising solutions for matrix differential Riccati equations with periodic coefficients is proposed. Our approach does not involve numerical solution of any differential equations. The approximation for a stabilising solution is found in the form of a trigonometric polynomial, the matrix coefficients of which are found by solving a specially constructed finite-dimensional semidefinite programming (SDP) problem. This problem is obtained using the maximality property of the stabi...
['Sergei V. Gusev', 'Anton S. Shiriaev', 'Leonid B. Freidovich']
SDP-based approximation of stabilising solutions for periodic matrix Riccati differential equations
646,371
All-to-all communication is one of the most dense collective communication patterns and occurs in many important applications in parallel and distributed computing. In this paper, we present a new all-to-all broadcast algorithm in multidimensional all-port mesh and torus networks. We propose a broadcast pattern which ensures a balanced traffic load in all dimensions in the network so that the all-to-all broadcast algorithm can achieve a very tight near-optimal transmission time. The algorithm also takes advantage of overlapping of message switching time and transmission time, and the total communication delay asymptotically matches the lower bound of all-to-all broadcast. Finally, the algorithm is conceptually simple and symmetrical for every message and every node so that it can be easily implemented in hardware and achieves the near-optimum in practice.
['Yuanyuan Yang', 'Jianchao Wang']
Near-optimal all-to-all broadcast in multidimensional all-port meshes and tori
398,919
This paper develops a convex optimization method to analyze the feasibility of a nonconvex bilinear matrix inequality (BMI), traditionally treated as an NP-hard problem. First, a sufficient condition for the convexity of a quadratic matrix inequality (QMI), which is a more general semidefinite constraint than a BMI, is presented. It is shown that satisfaction of the sufficient convexity condition implies that the QMI constraint can be transformed into an equivalent linear matrix inequality (LMI) constraint, which can be efficiently solved by well-developed interior-point algorithms. This result constitutes perhaps the first systematic methodology for verifying the convexity of QMIs in the literature of semidefinite programming (SDP) in control. For the BMI problem, a method to derive a convex inner approximation is discussed. The BMI feasibility analysis method is then applied to a nonlinear observer design problem in which the estimation error dynamics are transformed into a Lur'e system with a sector condition constructed from the element-wise bounds on the Jacobian matrix of the nonlinearities. The developed numerical algorithm is used to design a nonlinear observer that satisfies multiple performance criteria simultaneously.
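The payoff of reducing a QMI to an LMI is that feasibility can then be checked by off-the-shelf SDP solvers. As background (not the paper's convexity test itself), here is a minimal LMI feasibility problem, a Lyapunov inequality, expressed with cvxpy; the system matrix is hypothetical.

```python
import numpy as np
import cvxpy as cp

# Hypothetical Hurwitz-stable system matrix
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                 # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)       # pure feasibility SDP
prob.solve()
print(prob.status)        # 'optimal' => a certificate P exists
print(np.round(P.value, 4))
```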
['Yan Wang', 'Rajesh Rajamani']
Feasibility analysis of the bilinear matrix inequalities with an application to multi-objective nonlinear observer design
975,787
High-resolution FMCW reflectometry is often realized by sampling the beat signal with a clock signal generated from an auxiliary interferometer. The drawback of this system is that the measurement range is limited to less than half of the optical path difference of the auxiliary interferometer in order to satisfy the sampling theorem. We propose and demonstrate a method to extend the measurement range of the system. The clock signal generated from the auxiliary interferometer is electronically frequency-multiplied by using a PLL circuit. The measurement range is experimentally extended by a factor of 20 while keeping high spatial resolution, and is theoretically extendable by a factor of 128. The advantage of the proposed system is that the optical path difference of the auxiliary interferometer can be kept short, which is very effective for obtaining a stable, low-time-jitter clock signal.
['Koichi Iiyama', 'Makoto Yasuda', 'Saburo Takamiya']
Extended-Range High-Resolution FMCW Reflectometry by Means of Electronically Frequency-Multiplied Sampling Signal Generated from Auxiliary Interferometer
153,884
This paper describes a region-growing algorithm for the segmentation of large lesions in T1-weighted magnetic resonance (MR) images of the head. The algorithm involves a gray-level similarity criterion to expand the region and a size criterion to prevent over-growing outside the lesion. The performance of the algorithm is evaluated and validated on a series of pathologic three-dimensional MR images of the head.
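A compact sketch of such a region-growing loop, assuming a 2-D slice for brevity (the paper works on 3-D MR volumes); the similarity test against the running region mean and the size cap mirror the two criteria described above.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10.0, max_size=5000):
    """Grow from `seed` while 4-neighbours stay within `tol` gray levels of
    the running region mean; stop early once `max_size` pixels are reached."""
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q and count < max_size:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - total / count) <= tol):
                mask[nr, nc] = True
                total += float(img[nr, nc])
                count += 1
                q.append((nr, nc))
    return mask

img = np.full((64, 64), 30.0)
img[20:40, 20:40] = 120.0                            # bright synthetic "lesion"
print(region_grow(img, (30, 30), tol=15.0).sum())    # 400 pixels
```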
['S. A. Hojjatoleslami', 'Frithjof Kruggel']
Segmentation of large brain lesions
96,452
For such biomedical applications as single cell manipulation and targeted delivery of chemicals, it is important to fabricate microstructures that can be powered and controlled without a tether in fluidic environments. In this work, we describe the construction and operation of micron-sized, biocompatible ferromagnetic microtransporters driven by external magnetic fields capable of exerting forces at the piconewton scale. We develop microtransporters using a simple, single-step microfabrication technique that allows us to produce large numbers in the same step. We also fabricate microgels to deliver drugs. We demonstrate that the microtransporters can be navigated to separate individual targeted cells with micron-scale precision, and can deliver microgels without disturbing the cells in the neighborhood or the local microenvironment.
['Mahmut Selman Sakar', 'Edward B. Steager', 'Anthony Cowley', 'Vijay Kumar', 'George J. Pappas']
Wireless manipulation of single cells using magnetic microtransporters
261,081
In this paper, a robust image watermarking scheme based on the curvelet transform is proposed. Considering the frequency and orientational sensitivity of the human visual system (HVS), we present an HVS model in the curvelet domain, which is used to control the embedding strength. The watermark is embedded into selected coefficients of the curvelet transform, and the original image is not required in the extraction process. The experimental results demonstrate that the proposed algorithm provides excellent robustness against most image processing methods.
['Yi Xiao', 'Lee-Ming Cheng', 'L. L. Cheng']
A Robust Image Watermarking Scheme Based on a Novel HVS Model in Curvelet Domain
459,875
The proposed MGlaber method is based on observation of the behavior of mites called Macrocheles glaber (Muller, 1860). It opens a series of optimization methods inspired by the behavior of mites, to which we have given a common name: Artificial Acari Optimization. Acarologists have observed three stages in the ovoviviparity process: preoviposition behaviour, oviposition behaviour (which is followed by holding an egg below the gnathosoma), and hatching of the larva supported by the female. The ovoviviparity phenomenon in this species seems to be favoured by two factors: poor feeding and poor quality of substrate. Experimental tests on a genetic algorithm were carried out. The MGlaber method was worked into a genetic algorithm by replacing the crossing and mutation methods. The obtained results indicate a significant increase in the algorithm's convergence without side effects in the form of evolution stopping at local extremes. The experiment was carried out one hundred times on random starting populations. No significant deviations of the measured results were observed. The research demonstrated a significant increase in the algorithm's operation speed: convergence of evolution increased about ten times. It should be noted that the MGlaber method was not only, or even not primarily, created for genetic algorithms. The authors perceive large potential for its application in all optimization methods where the decision about the further future of solutions is taken as a result of evaluating the objective function value. The authors therefore treat this paper as the beginning of a cycle on Artificial Acari Optimization, which will include a series of methods inspired by the behaviour of different species of mites.
['Jacek M. Czerniak', 'Dawid Ewald']
A New MGlaber Approach as an Example of Novel Artificial Acari Optimization
836,297
Many optimization algorithms have been developed by drawing inspiration from swarm intelligence (SI). These SI-based algorithms can have some advantages over traditional algorithms. In this paper, we carry out a critical analysis of these SI-based algorithms by analyzing their ways to mimic evolutionary operators. We also analyze the ways of achieving exploration and exploitation in algorithms by using mutation, crossover and selection. In addition, we also look at algorithms using dynamic systems, self-organization and Markov chain framework. Finally, we provide some discussions and topics for further research.
['Xin-She Yang']
Swarm Intelligence Based Algorithms: A Critical Analysis
139,235
Power gating has been widely employed to reduce subthreshold leakage. Data retention elements (flip-flops and isolation circuits) are used to preserve circuit states during standby mode, if the states are needed again after wake-up. These elements must be controlled by an external power management unit, causing a network of control signals implemented with extra wires and buffers. A power-gated circuit with autonomous data retention (APG) is proposed to remove the overhead involved in control signals. Retention elements in APG derive their control by detecting rising potential of virtual ground rails when power gating starts, i.e., they control themselves without explicit control signals. Design of retention elements for APG is addressed to facilitate safe capturing of circuit states. Experiments with 65-nm technology demonstrate that, compared to standard power gating, total wirelength, and average wiring congestion are reduced by 8.6% and 4.1% on average, respectively, at a cost of 6.8% area increase. In order to fast charge virtual ground rails, a pMOS switch driven by a short pulse is employed to directly provide charges to virtual ground. This helps retention elements avoid short-circuit current while making transition to standby mode. The optimization procedure for sizing pMOS switch and deciding pulse width is addressed, and assessed with 65-nm technology. Experiments show that, compared to standard power gating, APG reduces the delay to enter and exit the standby mode by 65.6% and 28.9%, respectively, with corresponding energy dissipation during the period cut by 46.1% and 36.5%. Standby mode leakage power consumption is also reduced by 15.8% on average.
['Jun Seomun', 'Youngsoo Shin']
Design and Optimization of Power-Gated Circuits With Autonomous Data Retention
224,853
['Marian Cristian Mihaescu']
Software Architectures of Applications Used for Enhancing On-line Educational Environments.
803,262
This paper proposes an edge detection scheme deduced from Fresnel diffraction. Analysis in this paper shows that the Fresnel convolution kernel function performs well on edge enhancement when images are transformed into complex functions. Due to its mathematical complexity, the method is simplified into a linear convolution filter, and the new edge detector is designed based on this simplified linear filter. Experimental results indicate that the new detector gives quantitative results equal to the Canny detector while being simpler to implement.
['Lu Hong Diao', 'Bin Yu', 'Hua Li']
A new edge detector based on Fresnel diffraction
204,294
Large amounts of data and information must be transmitted over the Internet during e-governance transactions. A smart e-governance system should deliver speedy, space-efficient, cost-effective and secure services among governments and their citizens, utilizing the benefits of Information and Communication Technologies (ICT). This paper proposes a space-efficient and secure data transmission scheme that uses a Modified Huffman algorithm for compression, which also yields better bandwidth utilization, together with an inner encryption technique using the one-way hash function SHA (Secure Hash Algorithm) to ensure message integrity.
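A minimal sketch of the two building blocks, using textbook Huffman coding (the paper's "Modified Huffman" variant is not specified here) and SHA-256 from Python's standard library as the integrity hash:

```python
import hashlib
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a Huffman code table (byte -> bit string) from byte frequencies."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq)
            in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, next_id, merged])
        next_id += 1
    return heap[0][2]

payload = b"e-governance record: citizen id 12345, service tax paid"
codes = huffman_codes(payload)
bits = "".join(codes[b] for b in payload)
digest = hashlib.sha256(payload).hexdigest()  # integrity check, sent alongside
print(len(bits) // 8 + 1, "bytes compressed vs", len(payload),
      "| sha256:", digest[:16])
```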
['Nikhilesh Barik', 'Sunil Karforma', 'J. K. Mondal', 'Arpita Ghosh']
Towards Design and Implementation of Space Efficient and Secured Transmission scheme on EGovernance data
629,691
We present Obstacle-aware Virtual Circuit geographic Routing (OVCR), a novel routing mechanism for obstacle-aware wireless sensor networks that uses obstacle-free paths computed by the sink to make the greedy forwarding of geographic routing dependable. As obstacles are detected, the powerful sink discovers obstacle-free paths, and the information about these paths is forwarded to sensors close to the obstacles, which find virtual circuits in a distributed manner. A virtual circuit, containing a sequence of relay positions (temporary destinations), is used for obstacle-free data packet forwarding. Data packets can then be forwarded greedily, either toward the location of the destination or toward a relay position. Simulation results show that OVCR is more dependable, since the success rate of packet forwarding is raised, and more efficient, since the hop count is reduced.
['Ming-Tsung Hsu', 'Frank Yeong-Sung Lin', 'Yue-Shan Chang', 'Tong-Ying Juang']
Reliable Greedy Forwarding in Obstacle-Aware Wireless Sensor Networks
38,173
The multidimensional Potential Energy Hypersurface (PEHS) for the cyclononane molecule was comprehensively investigated at the Hartree-Fock (HF), and Density Functional Theory (DFT) levels of theory. Second-order Moller-Plesset perturbation theory (MP2) optimizations were also carried out to confirm the low-energy conformations. The previously reported Geometrical Algorithm to Search Conformational Space (GASCOS) has been used to generate the starting geometries for the conformational analysis. The GASCOS algorithm combined with ab initio and DFT optimization permits searching of the potential energy hypersurface for all minimum-energy conformations as well as transition structures connecting the low-energy forms. The search located all previously reported structures together with 11 transition states, some of which were not found by earlier searching techniques. Altogether, 16 geometries (five low-energy conformations and 11 transition states) were found to be important for a description of the conformational features of cyclononane. RB3LYP/aug-cc-pVTZ//RB3LYP/6-31G(d) calculations suggest a conformational mixture between the twist boat-chair and twist chair-boat conformations as the preferred forms. In addition only the twist chair-chair conformation with 1.52 kcal/mol above the global minimum should contribute somewhat to the equilibrium mixture of conformations. Our results allow us to form a concise idea about the internal intricacies of the 9D vector space describing the conformation of cyclononane as well as the associated conformational potential energy hypersurface of nine independent variables.
['Fernando D. Suvire', 'Luis N. Santagata', 'José A. Bombasaro', 'Ricardo D. Enriz']
Dynamics of flexible cycloalkanes. Ab initio and DFT study of the conformational energy hypersurface of cyclononane.
227,194
We present an area filling mobile robot system for indoor environment based on fuzzy logic and behavioral control using up-to-date environmental heterogeneous information perceived from range sensors and vision system. For gathering environmental information in the proximity of the robot and avoiding collisions with obstacles when navigating, infrared and ultrasonic sensors are utilized on the mobile robot system. Based on human expert knowledge, several basic behaviors for area filling are implemented using fuzzy inference. By exploiting straight lines features found on standard suspended ceilings, orientation error is corrected effectively while following a path during area filling. A finite state machine mechanism is used to control the behavior transitions and accomplish the area filling in the desired region. Experiments indicate the effectiveness of the system.
['Yili Fu', 'Sherman Y. T. Lang']
Fuzzy logic based mobile robot area filling with vision system for indoor environments
470,719
Progressive alignment is a widely used approach for computing multiple sequence alignments (MSAs). However, aligning several hundred or thousand sequences with popular progressive alignment tools such as ClustalW requires hours or even days on state-of-the-art workstations. This paper presents MSA-CUDA, a parallel MSA program, which parallelizes all three stages of the ClustalW processing pipeline using CUDA and achieves significant speedups compared to the sequential ClustalW for a variety of large protein sequence datasets. Our tests on a GeForce GTX 280 GPU demonstrate average speedups of 36.91 (for long protein sequences), 18.74 (for average-length protein sequences), and 11.27 (for short protein sequences) compared to the sequential ClustalW running on a Pentium 4 3.0 GHz processor. Our MSA-CUDA outperforms ClustalW-MPI running on 32 cores of a high performance workstation cluster.
['Yongchao Liu', 'Bertil Schmidt', 'Douglas L. Maskell']
MSA-CUDA: Multiple Sequence Alignment on Graphics Processing Units with CUDA
531,248
We target power dissipation in field-programmable gate array (FPGA) interconnect and present three approaches that leverage a unique property of FPGAs, namely, the presence of unused routing conductors. A first technique attacks dynamic power by placing unused conductors, adjacent to used conductors, into a high-impedance state, reducing the effective capacitance seen by used conductors. A second technique, charge recycling, re-purposes unused conductors as charge reservoirs to reduce the supply current drawn for a positive transition on a used conductor. A third approach reduces leakage current in interconnect buffers by pulse-based signalling, allowing a driving buffer to be placed into a high impedance stage after a logic transition. All three techniques require CAD support in the routing stage to encourage specific positionings of unused conductors relative to used conductors.
['Safeen Huda', 'Jason Helge Anderson']
Power Optimization of FPGA Interconnect Via Circuit and CAD Techniques
698,311
['Agnieszka Szczęsna', 'Przemysław Skurowski', 'Przemysław Pruszowski', 'Damian Peszor', 'Marcin Paszkuta', 'Konrad Wojciechowski']
Reference Data Set for Accuracy Evaluation of Orientation Estimation Algorithms for Inertial Motion Capture Systems
868,454
Dynamic taint analysis has proved to be very effective in solving security problems recently, especially in software vulnerability detection and malicious behavior prevention. Unfortunately, most current research in this field focuses on runtime protection and is incapable of discovering potential threats in the software. This paper describes a novel approach that overcomes the limitations of traditional dynamic taint analysis by integrating static analysis into the system, and presents the framework SDCF. The framework translates the binary into assembly code and tracks the data flow. Then, with static methods, the system can obtain important information that cannot be gained at runtime, such as unexecuted parts of the code. Once this information is acquired, it is provided to client tools. The practicability of the framework is validated by implementing and evaluating a tool built on SDCF. The results of the experiments show that our system is able to detect latent software vulnerabilities efficiently.
['Ruoyu Zhang', 'Shiqiu Huang', 'Zhengwei Qi', 'Haibing Guan']
Combining Static and Dynamic Analysis to Discover Software Vulnerabilities
98,388
A robot in the home of an elderly person providing assistive care will face many difficult decisions. I focus on a set of tasks that are very common and often stressful. Medication management tasks are ideal for a robot to assist in, but even a task with straightforward guidelines and goals can have numerous moral issues brought about by the social interaction between the human and the robot. Building on artificial intelligence work in production systems, decision theory, and analogical reasoning I am developing the architectural components necessary for these interactions. These components are based on computational models that have been informed by work in psychology and occupational therapy. It is my goal that by bringing these disciplines together I will be able to help design robots that are more morally acceptable, safer, and overall do a better job at assisting those in need.
['Jason R. Wilson']
Robot Assistance in Medication Management Tasks
722,630
Deep Learning (DL) is becoming popular in a wide range of domains. Many emerging applications, ranging from image and speech recognition to natural language processing and information retrieval, rely heavily on deep learning techniques, especially Neural Networks (NNs). NNs have led to great advances in recognition accuracy compared with other traditional methods in recent years. NN-based methods demand much more computation and memory resource, and therefore a number of NN accelerators have been proposed on CMOS-based platforms, such as FPGA and GPU [1]. However, it is becoming more and more difficult to obtain substantial power-efficiency gains directly through the scaling down of traditional CMOS techniques. Meanwhile, the large data volumes in DL applications also run into an ever-increasing "memory wall" challenge because of the limitations of the von Neumann architecture. Consequently, there is growing research interest in exploring emerging nano-devices and new computing architectures to further improve power efficiency [2].
['Yu Wang', 'Lixue Xia', 'Ming Cheng', 'Tianqi Tang', 'Boxun Li', 'Huazhong Yang']
RRAM based learning acceleration
904,834
['Mohamed Abdel-Nasser', 'Antonio Moreno', 'Domenec Puig']
Analysis of the evolution of breast tumours using strain tensors.
800,125
In this paper we describe our approach for developing a QoS-aware, dependable execution environment for large-scale distributed stream processing applications. Distributed stream processing applications have strong timeliness and security demands. In particular, we address the following challenges: (1) propose a real-time dependable execution model by extending the component-based execution model with real-time and dependability properties, and (2) develop QoS-aware application composition and adaptation techniques that employ resource management strategies and security policies when discovering and selecting application components. Our approach enables us to develop a distributed stream processing environment that is predictable, secure, flexible and adaptable.
['Vana Kalogeraki', 'Dimitrios Gunopulos', 'Ravi S. Sandhu', 'Bhavani M. Thuraisingham']
QoS Aware Dependable Distributed Stream Processing
43,396
['Leibo Li', 'Jiazhe Chen', 'Xiaoyun Wang']
Multiplied Conditional Impossible Differential Attack on Reduced-Round Camellia.
785,647
This paper addresses an Electric Vehicle Relocation Problem (E-VReP), in one-way carsharing systems, based on operators who use folding bicycles to facilitate vehicle relocation. In order to calculate the economic sustainability of this relocation approach, a revenue associated with each relocation request satisfied and a cost associated with each operator used are introduced. The new optimization objective maximizes the total profit. To overcome the drawback of the high CPU time required by the Mixed Integer Linear Programming formulation of the E-VReP, two heuristic algorithms, based on the general properties of the feasible solutions, are designed. Their effectiveness is tested on two sets of realistic instances. In the first, all the requests have the same revenue, while, in the second, the revenue of each request has a variable component related to the user’s rent-time and a fixed part related to customer satisfaction. Finally, a sensitivity analysis is carried out on both the number of requests and the fixed revenue component.
['Maurizio Bruglieri', 'F. Pezzella', 'Ornella Pisacane']
Heuristic algorithms for the operator-based relocation problem in one-way electric carsharing systems
691,944
A practical built-in current sensor (BICS) is described that senses the voltage drop on supply lines caused by quiescent current leakage. This noninvasive procedure avoids any performance degradation. The sensor performs analog-to-digital conversion of the input signal using a stochastic process, with scan chain readout. Self-calibration and digital chopping are used to minimize offset as well as low-frequency noise and drift. The measurement results of a 350 nm test chip are described. The sensor achieves a resolution of 182 µA, with the promise of much higher resolution.
['Bin Xue', 'D. M. H. Walker']
IDDQ test using built-in current sensing of supply line voltage drop
353,214
The evolution of the World Wide Web has progressed from simple, classic web pages with text and static images only to Web 2.0 pages with rich multimedia content, mashups and desktop-style applications. The cornerstone of Web 2.0 technologies is an API called XMLHttpRequest – an interface that allows network requests to be performed asynchronously without blocking the user interface of the web browser. In this paper we introduce a new operation called AsyncHttpEvalRequest that is the logical extension of XMLHttpRequest for web applications. The main benefit of the new operation is that it allows large web applications to be downloaded incrementally and more securely.
['Janne Kuuskeri', 'Tommi Mikkonen', 'Antero Taivalsaari']
AsyncHttpEvalRequest: A New Primitive for Downloading Web Applications Incrementally and Securely
102,193
The two single-carrier line codes which have recently been considered for VDSL standards are quadrature amplitude modulation (QAM) and carrierless amplitude/phase modulation (CAP). They have the same signal constellation and transmitted spectral shape. However, experiments have shown that the equalizer convergence rates for QAM and CAP can be different. These convergence rates can impact both the training sequence length requirements and the equalizer tracking ability for time-varying channels. We investigate a variety of equalizer architectures and algorithms and study the training sequence equalizer convergence behavior of QAM- and CAP-based VDSL systems by using both theoretical analysis and Monte Carlo computer simulations. We show that the particular equalizer structure and channel have important effects on the convergence behavior. Based on our simulations for various VDSL test loops, we recommend using the CAP matched filtered phase-splitting equalizer due to its good convergence speed and relatively low complexity.
['Lee M. Garth', 'Fan Li']
A comparison of QAM and CAP equalizers for VDSL
87,524
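For readers unfamiliar with the convergence behavior being compared, the following is a self-contained LMS training sketch for a linear equalizer on a 4-QAM stream through a toy dispersive channel; it illustrates generic training-sequence convergence only, not the paper's CAP phase-splitting or fractionally spaced structures:

```python
# Minimal complex-LMS equalizer convergence sketch (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 5000, 11, 0.01
sym = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
chan = np.array([1.0, 0.4, 0.2])            # toy dispersive channel
rx = np.convolve(sym, chan)[:n] + 0.01 * (
    rng.standard_normal(n) + 1j * rng.standard_normal(n))

w = np.zeros(taps, complex)
delay = taps // 2                            # decision delay for alignment
mse = []
for k in range(taps, n):
    x = rx[k - taps:k][::-1]                 # regressor (most recent first)
    e = sym[k - delay] - w.conj() @ x        # error vs. known training symbol
    w += mu * np.conj(e) * x                 # complex LMS update
    mse.append(abs(e) ** 2)

print("MSE first 100:", np.mean(mse[:100]), " last 100:", np.mean(mse[-100:]))
```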
Credit has been considered one of the most important problems in e-commerce. Credit evaluation models and algorithms for B2B e-commerce have received less research attention than those for C2C or B2C. A new enterprise credit evaluation model and algorithm for B2B e-commerce based on an intermediate Web site is proposed. It is assumed that an enterprise's credit in the B2B e-commerce environment is composed of basic credit and transaction credit, and that detailed transaction data are available and can be accessed for credit evaluation purposes. Based on back-propagation (BP) neural network principles, three-layer models are established to evaluate credit from quantitative indicators. Compared with basic credit, an enterprise's transaction credit is more practical for its potential trading partners on an intermediate B2B website. Experimental results demonstrate the effectiveness of the new credit evaluation model and algorithm.
['Chunhui Piao', 'Changyou Zhang', 'Xufang Han', 'Jing An']
Research on Credit Evaluation Model and Algorithm for B2B e-Commerce
482,873
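A hedged sketch of the three-layer BP-network classifier idea with scikit-learn; the credit indicators below are synthetic placeholders, not the paper's B2B transaction features:

```python
# Three-layer (input / hidden / output) back-propagation network for a toy
# credit-scoring task. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((500, 6))                     # 6 toy credit indicators in [0, 1]
y = (X @ np.array([2, 1, 1, 0.5, 0.5, 1]) > 3).astype(int)  # synthetic label

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
# One hidden layer => a classic 3-layer BP network.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```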
A key feature of Web 2.0 is the possibility of sharing, creating and editing on-line content. This approach is increasingly used in learning environments to favor interaction and cooperation among students. These functions should be accessible as well as easy to use for all participants. Unfortunately, accessibility and usability issues still exist for Web 2.0-based applications. For instance, Wikipedia presents many difficulties for the blind. In this paper we discuss a possible solution for simplifying the Wikipedia editing page when interacting via a screen reader. Building an editing interface that conforms to W3C ARIA (Accessible Rich Internet Applications) recommendations would overcome accessibility and usability problems that prevent blind users from actively contributing to Wikipedia.
['M. Claudia Buzzi', 'Marina Buzzi', 'Barbara Leporini', 'Caterina Senette']
Making Wikipedia editing easier for the blind
2,566
Motion capture shoots involve a wide range of technology and entertainment production systems such as motion capture cameras, tracking software and digital environments to create entertainment applications. However, acting in this high-tech environment is still traditional and brings its own challenges to the actors. Good acting and imagination skills are highly needed for many motion capture shoots to deliver satisfying results. In our research, we are exploring how to support the actors and use a head-mounted projection display to create a mixed reality application helping actors to perform during motion capture shoots. This paper presents the latest enhancements of our head-mounted projection display application and discusses the use of this technology for motion capture acting as well as the potential use for entertainment purposes.
['Daniel Kade', 'Rikard Lindell', 'Hakan Urey', 'Oğuzhan Özcan']
Acting 2.0: when entertainment technology helps actors to perform
832,883
['Luca Mainetti', 'Roberto Paiano', 'Stefania Pasanisi', 'Roberto Vergallo']
Modeling of Complex Taxonomy: A Framework for Schema-Driven Exploratory Portal
856,010
This paper focuses on the employment of analysis-suitable T-spline surfaces of arbitrary degree for performing structural analysis of fully nonlinear thin shells. Our aim is to bring closer a seamless and flexible integration of design and analysis for shell structures. The local refinement capability of T-splines together with the Kirchhoff–Love shell discretization, which does not use rotational degrees of freedom, leads to a highly efficient and accurate formulation. Trimmed NURBS surfaces, which are ubiquitous in CAD programs, cannot be directly applied in analysis; however, T-splines can reparameterize these surfaces, leading to analysis-suitable untrimmed T-spline representations. We consider various classical nonlinear benchmark problems where the cylindrical and spherical geometries are exactly represented and point loads are accurately captured through local h-refinement. Taking advantage of the higher inter-element continuity of T-splines, smooth stress resultants are plotted without using projection methods. Finally, we construct various trimmed NURBS surfaces with Rhino, an industrial and general-purpose CAD program, convert them to T-spline surfaces, and directly use them in analysis.
['Hugo Casquero', 'Lei Liu', 'Yongjie Zhang', 'A. Reali', 'Josef Kiendl', 'Hector Gomez']
Arbitrary-degree T-splines for isogeometric analysis of fully nonlinear Kirchhoff–Love shells
903,271
This paper explores algorithms for subspace clustering with missing data. In many high-dimensional data analysis settings, data points lie in or near a union of subspaces. Subspace clustering is the process of estimating these subspaces and assigning each data point to one of them. However, in many modern applications the data are severely corrupted by missing values. This paper describes two novel methods for subspace clustering with missing data: (a) group-sparse subspace clustering (GSSC), which is based on group-sparsity and alternating minimization, and (b) mixture subspace clustering (MSC), which models each data point as a convex combination of its projections onto all subspaces in the union. Both of these algorithms are shown to converge to a local minimum, and experimental results show that they outperform the previous state-of-the-art, with GSSC yielding the highest overall clustering accuracy.
['Daniel L. Pimentel-Alarcon', 'Laura Balzano', 'Roummel F. Mareia', 'Robert D. Nowak', 'Rebecca Willett']
Group-sparse subspace clustering with missing data
882,744
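The following is a simplified alternating-minimization sketch in the spirit of subspace clustering with missing data (k-subspaces with observed-entry residuals and a crude zero-fill refit), not the paper's GSSC or MSC formulations:

```python
# Simplified k-subspaces with missing data: alternately assign each point
# to the subspace with the smallest residual on its observed entries, then
# refit each subspace by truncated SVD of its (zero-filled) points.
import numpy as np

def k_subspaces_missing(X, mask, k, dim, iters=20, seed=0):
    """X: D x N data (unobserved entries arbitrary); mask: D x N boolean."""
    rng = np.random.default_rng(seed)
    D, N = X.shape
    labels = rng.integers(k, size=N)
    bases = [np.linalg.qr(rng.standard_normal((D, dim)))[0] for _ in range(k)]
    for _ in range(iters):
        for j in range(k):                         # refit step
            idx = labels == j
            if idx.sum() >= dim:
                Xj = np.where(mask[:, idx], X[:, idx], 0.0)  # crude zero-fill
                bases[j] = np.linalg.svd(Xj, full_matrices=False)[0][:, :dim]
        for i in range(N):                         # assignment step
            obs = mask[:, i]
            res = []
            for U in bases:
                Uo = U[obs]                        # basis on observed rows only
                coef = np.linalg.lstsq(Uo, X[obs, i], rcond=None)[0]
                res.append(np.linalg.norm(X[obs, i] - Uo @ coef))
            labels[i] = int(np.argmin(res))
    return labels

# Toy demo: two 2-D subspaces in R^10, ~30% of entries missing.
rng = np.random.default_rng(1)
U1 = np.linalg.qr(rng.standard_normal((10, 2)))[0]
U2 = np.linalg.qr(rng.standard_normal((10, 2)))[0]
X = np.hstack([U1 @ rng.standard_normal((2, 40)), U2 @ rng.standard_normal((2, 40))])
mask = rng.random(X.shape) > 0.3
print(k_subspaces_missing(X, mask, k=2, dim=2))
```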
We consider coupled systems consisting of a well-posed and strictly proper subsystem and a finite-dimensional subsystem connected in feedback. The external world interacts with the coupled system via the finite-dimensional part, which receives the external input and sends out the output. We allow the external input and the feedback signal from the infinite-dimensional part to be two separate inputs of the finite-dimensional part. We consider the output of the infinite-dimensional part as an additional output. Under several assumptions, we derive well-posedness, regularity and generic exact controllability results for such coupled systems.
['Xiaowei Zhao', 'George Weiss']
Well-posedness and generic exact controllability of coupled systems
109,234
A general supervision system is developed for the whole process of food traceability based on Atom technology in this paper. The design methods of different hardware units are described based on X86 IA (Intel Architecture) and modular function design. A software design method is proposed for processing touch-screen pressure data. In addition, taking the fruit and vegetable traceability system for produce supplied to Hong Kong through Shenzhen ports as the experimental platform, a concrete application is introduced. The experimental results demonstrate the effectiveness and practicality of the developed system.
['Xianyu Bao', 'Qing Lu', 'Yang Wang', 'Weimin Zheng']
Food traceability: General supervision system and applications
75,449
Keywords are indexed automatically for large-scale categorization corpora. Keywords indexed in more than 20 documents are selected as seed words, thus overcoming the subjectivity of selecting seed words in clustering. At the same time, clustering is limited to particular category corpora, and a keyword-indexing feature extraction method is adopted to obtain domain-specific words automatically, thus reducing noise in the similarity calculation.
['Liu Hua']
Words Clustering Based on Keywords Indexing from Large-scale Categorization Corpora
24,706
Telehealth provides an opportunity to reduce healthcare costs through remote patient monitoring, but is not appropriate for all individuals. Our goal was to identify the patients for whom telehealth has the greatest impact. Challenges included the high variability of medical costs and the effect of selection bias on the cost difference between intervention patients and controls. Using Medicare claims data, we computed cost savings by comparing each telehealth patient to a group of control patients who had similar healthcare resource utilization. These estimates were then used to train a predictive model using logistic regression. Filtering the patients based on the model resulted in an average cost savings of $10K, an improvement over the current expected loss of $2K (without filtering).
['Martha Ganser', 'Sauptik Dhar', 'Unmesh Kurup', 'Carlos Cunha', 'Aca Gacic']
Patient Identification for Telehealth Programs
666,970
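A minimal sketch of the selection pipeline described above: label past patients by whether their estimated savings were positive, fit a logistic model, and enroll only those predicted likely to benefit. All data, features and thresholds are synthetic stand-ins, not the Medicare claims setup:

```python
# Hedged patient-selection sketch: predict who is likely to benefit from
# telehealth, then filter enrollment by predicted probability of savings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
util = rng.gamma(2.0, 5.0, n)              # prior healthcare utilization (toy)
age = rng.normal(70, 8, n)
# Synthetic "observed savings" for past telehealth patients:
savings = 2000 * (util > 12) - 500 + rng.normal(0, 800, n)
X = np.column_stack([util, age])
y = (savings > 0).astype(int)              # did the patient save money?

model = LogisticRegression().fit(X, y)
enroll = model.predict_proba(X)[:, 1] > 0.6   # enroll likely beneficiaries
print("enrolled:", enroll.sum(), "mean savings if filtered:", savings[enroll].mean())
```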
We present a new method to estimate the clutter-plus-noise covariance matrix used to compute an adaptive filter in space-time adaptive processing (STAP). The method computes a ML estimate of the clutter scattering coefficients using a Bayesian framework and knowledge on the structure of the covariance matrix. A priori information on the clutter statistics is used to regularize the estimation method. Other estimation methods based on the computation of the power spectrum using for instance the periodogram are compared to our method. The result in terms of SINR loss shows that the proposed method outperforms the other ones.
['Xavier Neyt', 'Marc Acheroy', 'Jacques Verly']
Maximum Likelihood Range Dependence Compensation for STAP
290,532
Holding biological motion (BM), the movements of animate entities, in working memory (WM) is important to our daily social life. However, how BM is maintained in WM remains unknown. The current study investigated this issue and hypothesized that, analogous to BM perception, the human mirror neuron system (MNS) is involved in rehearsing BM in WM. To examine the MNS hypothesis of BM rehearsal, we used an EEG index of mu suppression (8–12 Hz), which has been linked to the MNS. Using a change detection task, we manipulated the BM memory load in three experiments. We predicted that mu suppression in the maintenance phase of WM would be modulated by the BM memory load; moreover, a negative correlation between the number of BM stimuli in WM and the degree of mu suppression may emerge. The results of Experiment 1 were in line with our predictions and revealed that mu suppression increased as the memory load increased from two to four BM stimuli; however, mu suppression then plateaued, as WM could only hold, at most, four BM stimuli. Moreover, the predicted negative correlation was observed. Corroborating the findings of Experiment 1, Experiment 2 further demonstrated that once participants used verbal codes to process the motion information, the mu suppression or its modulation by memory load vanished. Finally, Experiment 3 demonstrated that the findings in Experiment 1 were not limited to one specific type of stimuli. Together, these results provide evidence that the MNS underlies the process of rehearsing BM in WM.
['Zaifeng Gao', 'Shlomo Bentin', 'Mowei Shen']
Rehearsing biological motion in working memory: An EEG study
34,690
Due to high demand uncertainty, excess inventory has been a key issue in inventory control. Caterpillar developed the dealers' parts inventory sharing (DPIS) and returns programs to help dealers cope with excess inventory. However, historical data show that the current returns policy has been very costly to Caterpillar due to the distribution strategy. In this project, we develop alternative returns policies and propose to use simulation to analyze the cost structure of the alternative policies, develop cost sharing schemes, and compare performance with the current policy under different scenarios. It is shown that the simulation tool we developed provides industry managers with a test ground for new returns strategies and the output analysis presents guidelines to set parameters when using the new strategies to manage returns distribution.
['Hui Zhao']
Simulation and analysis of dealers' returns distribution strategy
20,164
This paper describes a parsing model for speech with repairs that makes a clear separation between linguistically meaningful symbols in the grammar and operations specific to speech repair in the operation of the parser. This system builds a model of how unfinished constituents in speech repairs are likely to finish, and finishes them probabilistically with placeholder structure. These modified repair constituents and the restarted replacement constituent are then recognized together in the same way that two coordinated phrases of the same type are recognized.
['Timothy A. Miller', 'Luan Nguyen', 'William Schuler']
Parsing Speech Repair without Specialized Grammar Symbols
179,504
The UAVs (Unmanned Aerial Vehicles) market is projected to grow, sustained by the technological progress in different domains related to UAVs and by the emergence of new civilian applications. However, this economic development might be held back due to increased regulation constraints. A major concern of public authorities is to ensure a safe sharing of the airspace, especially over populated areas. To reach this aim, a fundamental mechanism is to provide permanent tracking of UAVs. In this paper, we investigate the path planning of autonomous UAVs with tracking capabilities provided by terrestrial wireless networks. We formalize this problem as a constrained shortest path problem, where the objective is to minimize the delay for reaching a destination, while ensuring a certain delivery ratio of messages reporting the drone's positions.
['Mustapha Bekhti', 'Marwen Abdennebi', 'Nadjib Achir', 'Khaled Boussetta']
Path planning of unmanned aerial vehicles with terrestrial wireless network tracking
725,084
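The delivery-ratio requirement makes this a constrained shortest path problem; as a simplified stand-in, the sketch below minimizes delay subject to an additive resource budget using label dominance (the paper's exact tracking constraint and network model are not reproduced):

```python
# Generic resource-constrained shortest path via labels on a priority queue.
import heapq

def rcsp(adj, src, dst, budget):
    """adj: {u: [(v, delay, resource), ...]}. Returns (delay, resource) of the
    fastest src->dst path whose total resource stays within budget."""
    best = {}                     # node -> Pareto list of (delay, resource)
    pq = [(0, 0, src)]            # labels ordered by delay
    while pq:
        d, r, u = heapq.heappop(pq)
        if u == dst:
            return d, r           # first dst pop = min-delay feasible path
        if any(d2 <= d and r2 <= r for d2, r2 in best.get(u, [])):
            continue              # dominated by an already-expanded label
        best.setdefault(u, []).append((d, r))
        for v, dd, rr in adj.get(u, []):
            if r + rr <= budget:  # prune budget-infeasible extensions
                heapq.heappush(pq, (d + dd, r + rr, v))
    return None

# Toy graph: the fastest route exceeds the budget, so a tracked detour wins.
adj = {"A": [("B", 1, 5), ("C", 2, 1)], "B": [("D", 1, 5)], "C": [("D", 2, 1)]}
print(rcsp(adj, "A", "D", budget=3))   # -> (4, 2) via C
```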
In this paper, a robust and efficient system identification method is proposed for a state-space model with heavy-tailed process and measurement noises by using the maximum likelihood criterion. An expectation maximization algorithm for a state-space model with heavy-tailed process and measurement noises is derived by treating auxiliary random variables as missing data, based on which a new nonlinear system identification method is proposed. Noise parameter estimates are updated analytically, and model parameter estimates are updated approximately using Newton's method. The effectiveness of the proposed method is illustrated in a numerical example concerning a univariate non-stationary growth model.
['Yulong Huang', 'Yonggang Zhang', 'Ning Li', 'Syed Mohsen Naqvi', 'Jonathon A. Chambers']
A robust and efficient system identification method for a state-space model with heavy-tailed process and measurement noises
880,691
Rooted triplets are becoming one of the most important types of input for reconstructing rooted phylogenies. A rooted triplet is a phylogenetic tree on three leaves and shows the evolutionary relationship of the corresponding three species. In this paper, we investigate the problem of inferring the maximum consensus evolutionary tree from a set of rooted triplets. This problem is known to be APX-hard. We present two new heuristic algorithms. For a given set of m triplets on n species, the FastTree algorithm runs in O(m + α(n)n²) time, where α(n) is the functional inverse of Ackermann's function. This is faster than any other previously known algorithm, although the outcome is less satisfactory. The Best Pair Merge with Total Reconstruction (BPMTR) algorithm runs in O(mn³) time and, on average, performs better than any other previously known algorithm for this problem.
['Soheil Jahangiri', 'Seyed Naser Hashemi', 'Hadi Poormohammadi']
New Heuristics for Rooted Triplet Consistency
115,555
Staged at Piombino, Italy in September 2015, euRathlon 2015 was the world's first multi-domain (air, land and sea) multi-robot search and rescue competition. In a mock-disaster scenario inspired by the 2011 Fukushima NPP accident, the euRathlon 2015 Grand Challenge required teams of robots to cooperate to map the area, find missing workers and stem a leak. In this paper we outline the euRathlon 2015 Grand Challenge and the approach used to benchmark and score teams. We conclude the paper with an evaluation of both the competition and the performance of the robot-robot teams in the Grand Challenge.
['Alan F. T. Winfield', 'Marta Palau Franco', 'Bernd Brüeggemann', 'A. Castro', 'Miguel Cordero Limon', 'Gabriele Ferri', 'Fausto Ferreira', 'Xingkun Liu', 'Yvan Petillot', 'Juha Röning', 'Frank E. Schneider', 'E. Stengler', 'Dario Sosa', 'A. Viguria']
euRathlon 2015: A Multi-domain Multi-robot Grand Challenge for Search and Rescue Robots
861,176
['Cristina Guerrero', 'Georgina Tryfou', 'Maurizio Omologo']
Channel Selection for Distant Speech Recognition Exploiting Cepstral Distance.
881,561
In this paper, we investigate the performance of bit-interleaved coded multiple beamforming (BICMB). We provide interleaver design criteria such that BICMB achieves full spatial multiplexing of min(N, M) and full spatial diversity of NM with N transmit and M receive antennas over quasi-static Rayleigh flat fading channels. If the channel is frequency selective, then BICMB is combined with orthogonal frequency division multiplexing (OFDM) (BICMB-OFDM) in order to combat ISI caused by the frequency-selective channels. BICMB-OFDM achieves full spatial multiplexing of min(N, M), while maintaining full spatial and frequency diversity of NML for an N×M system over L-tap frequency-selective channels when an appropriate convolutional code is used. Both systems analyzed in this paper assume perfect channel state information both at the transmitter and the receiver. Simulation results show that, when the perfect channel state information assumption is satisfied, BICMB and BICMB-OFDM provide substantial performance or complexity gains when compared to other spatial multiplexing and diversity systems.
['Enis Akay', 'Ersin Sengul', 'Ender Ayanoglu']
Bit Interleaved Coded Multiple Beamforming
44,438
We consider the problem of distributed multitask learning, where each machine learns a separate, but related, task. Specifically, each machine learns a linear predictor in high-dimensional space, where all tasks share the same small support. We present a communication-efficient estimator based on the debiased lasso and show that it is comparable with the optimal centralized method.
['Jialei Wang', 'Mladen Kolar', 'Nathan Srerbo']
Distributed Multi-Task Learning
729,641
When transposing large matrices using SDRAM memories, typically a control overhead significantly reduces the data throughput. In this paper, a new address mapping scheme is introduced, taking advantage of multiple banks and burst capabilities of modern SDRAMs. Other address mapping strategies minimize the total number of SDRAM page-opens while traversing the two-dimensional index-space in row or column direction. The new approach uses bank interleaving methods to hide wait cycles, caused by page-opens. In this way, data bus wait cycles do not depend on the overall number of page-opens directly. It is shown, that the data bus utilization can be increased significantly, in particular, if memories are accessed with parallel samples. Therefore, double buffering can be omitted. As a special operation, 2D-FFT processing for radar applications is considered. Depending on SDRAM parameters and dimensions, a continuous bandwidth utilization of 96 to 98% is achieved for accesses in both matrix directions, including all refresh commands.
['Stefan Langemeyer', 'Peter Pirsch', 'Holger Blume']
Using SDRAMs for two-dimensional accesses of long 2^n × 2^m-point FFTs and transposing
454,582
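An illustrative (not the paper's) bank-interleaved address mapping: XOR-folding part of the linear address into the bank bits makes both row-wise and column-wise walks rotate through banks, so a page-open in one bank can overlap with transfers from another:

```python
# Toy bank-interleaved address mapping for matrix accesses. The constants
# and the XOR-fold scheme are illustrative assumptions.
NUM_BANKS = 4
PAGE_WORDS = 512          # words per SDRAM row (page)

def map_address(i, j, n_cols):
    """Map matrix element (i, j) to (bank, sdram_row, column)."""
    linear = i * n_cols + j
    # Fold a page-index-derived value into the bank bits so that walking
    # the matrix in either direction cycles through the banks.
    bank = (linear ^ (linear // (PAGE_WORDS * NUM_BANKS))) % NUM_BANKS
    word = linear // NUM_BANKS
    return bank, word // PAGE_WORDS, word % PAGE_WORDS

# Walking down a column rotates through banks, hiding page-open latency:
print([map_address(i, 0, n_cols=1024)[0] for i in range(8)])
```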
Antibacterial resistance has been progressively increasing mostly due to selective antibiotic pressure, forcing pathogens to either adapt or die. The development of antibacterial resistance to last-line antibiotics urges the formulation of alternative strategies for drug discovery. Recently, attention has been devoted to the development of computational methods to predict drug-target interactions (DTIs). Here we present a computational strategy to predict proteome-scale DTIs based on the combination of the drugs' chemical features and substructural fingerprints, and on the structural information and physicochemical properties of the proteins. We propose an ensemble learning combination of Support-Vector Machine and Random Forest to deal with the complexity of DTI classification. Two distinct classification models were developed to ascertain whether taking the type of protein target (i.e., enzymes, G-protein-coupled receptors, ion channels and nuclear receptors) into account improves classification performance. External validation analysis was consistent with internal five-fold cross-validation, with an AUC of 0.87. This strategy was applied to the proteome of methicillin-resistant Staphylococcus aureus COL (MRSA COL, taxonomy id: 93062), a major nosocomial pathogen worldwide whose antimicrobial resistance and incidence rate keeps steadily increasing. Our predictive framework is available at http://bioinformatics.ua.pt/software/dtipred.
['Edgar D. Coelho', 'Joel P. Arrais', 'Jose Luis Oliveira']
Ensemble-Based Methodology for the Prediction of Drug-Target Interactions
868,829
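A hedged scikit-learn sketch of the SVM + Random Forest ensemble idea using soft voting; the feature matrix is a random placeholder for the drug fingerprints and protein descriptors actually used:

```python
# SVM + Random Forest soft-voting ensemble for a binary DTI-style task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.random((400, 64))                  # placeholder drug+protein features
y = rng.integers(0, 2, 400)                # placeholder interact / no-interact

ens = VotingClassifier(
    estimators=[("svm", SVC(probability=True)), ("rf", RandomForestClassifier())],
    voting="soft",                          # average predicted probabilities
)
print("5-fold AUC:", cross_val_score(ens, X, y, cv=5, scoring="roc_auc").mean())
```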
The nature and degree of competition in an industry hinge on four basic forces: Production, Technology, Marketing and Research & Development (R&D). To establish a strategic agenda for dealing with these contending currents and to grow despite them, a company must understand how they work in its industry and how they affect the company in its particular situation. This study adopts Fuzzy MCDM methods, details how these forces operate, and suggests ways of adjusting to them, and, where possible, of taking advantage of them. Knowledge of these underlying sources of competitive pressure provides the groundwork for a strategic agenda of action. The results highlight the critical strengths and weaknesses of the company, animate the positioning of the company in its industry, clarify the areas where strategic changes may yield the greatest payoff, and highlight the places where industry trends promise to hold the greatest significance as either opportunities or threats.
['Mei-Chen Lo', 'Gwo-Hshiung Tzeng']
Fuzzy Hybrid MCDM for Building Strategy Forces
547,463
Carrier aggregation is a key feature of next generation wireless networks to deliver high-bandwidth links. This paper studies carrier aggregation for autonomous networks operating in shared spectrum. In our model, networks decide how many and which channels to aggregate in multiple frequency bands, hence extending the distributed channel allocation framework. Moreover, our model takes into account physical-layer issues, such as the out-of-channel interference in adjacent frequency channels and the cost associated with inter-band carrier aggregation. We propose learning algorithms that converge to Nash equilibria in a reasonable number of iterations under the assumption of incomplete and imperfect information.
['Hamed Ahmadi', 'Irene Macaluso', 'Luiz A. DaSilva']
Carrier aggregation as a repeated game: Learning algorithms for efficient convergence to a Nash equilibrium
309,532
In the research of multi-objective optimization algorithms, evolutionary algorithms have been considered very successful tools. Artificial Immune System (AIS)-based algorithms, as a viable alternative, have also been widely developed in this domain. Over the years, researchers of evolutionary algorithms have extended their interest to many-objective situations; however, work on AIS-based algorithms is rather scattered. This paper extends an AIS-based optimization algorithm to solve such many-objective optimization problems. The idea of ε-dominance and the holistic model of the immune network theory have been adopted to enhance the exploitation ability, aiming for quick convergence.
['Wilburn W. P. Tsang', 'Henry Y. K. Lau']
An Artificial Immune System-based Many-Objective Optimization Algorithm with Network Activation Scheme
311,313
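For reference, a minimal additive ε-dominance test as commonly defined in many-objective optimization (minimization convention); this is the generic operator only, not the paper's full AIS algorithm:

```python
# Additive epsilon-dominance check (minimization). One common definition;
# multiplicative variants also exist in the literature.
def eps_dominates(a, b, eps):
    """True if objective vector a epsilon-dominates b."""
    return all(ai - eps <= bi for ai, bi in zip(a, b)) and \
           any(ai - eps < bi for ai, bi in zip(a, b))

print(eps_dominates([1.0, 2.0], [1.05, 2.05], eps=0.1))  # True
print(eps_dominates([1.0, 2.0], [0.5, 2.5], eps=0.1))    # False
```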
['Jun Yamashita', 'Kazunobu Sato']
Automated Vehicles for Greenhouse Automation
814,050
Estimating principal curvatures and principal directions of a surface from a polyhedral approximation with a large number of small faces, such as those produced by iso-surface construction algorithms, has become a basic step in many computer vision algorithms, particularly in those targeted at medical applications. We describe a method to estimate the tensor of curvature of a surface at the vertices of a polyhedral approximation. Principal curvatures and principal directions are obtained by computing in closed form the eigenvalues and eigenvectors of certain 3×3 symmetric matrices defined by integral formulas, and closely related to the matrix representation of the tensor of curvature. The resulting algorithm is linear, both in time and in space, as a function of the number of vertices and faces of the polyhedral surface.
['Gabriel Taubin']
Estimating the tensor of curvature of a surface from a polyhedral approximation
77,639
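A compact sketch of the per-vertex tensor-of-curvature estimate in the style of Taubin's method, using uniform edge weights instead of the paper's integral weighting; on a unit-sphere cap both recovered principal curvatures come out close to 1:

```python
# Taubin-style curvature tensor at a mesh vertex (simplified weighting).
import numpy as np

def curvature_tensor(v, normal, neighbors):
    """v: vertex; normal: unit vertex normal; neighbors: (k, 3) positions."""
    M = np.zeros((3, 3))
    w = 1.0 / len(neighbors)                   # uniform weights (simplification)
    for vj in neighbors:
        d = vj - v
        kappa = 2.0 * normal.dot(-d) / d.dot(d)  # directional curvature estimate
        t = d - normal * normal.dot(d)           # project edge into tangent plane
        t /= np.linalg.norm(t)
        M += w * kappa * np.outer(t, t)
    # The tangent-plane eigenvalues (m1, m2) of M relate to the principal
    # curvatures via k1 = 3*m1 - m2 and k2 = 3*m2 - m1 (Taubin's relation).
    evals, evecs = np.linalg.eigh(M)
    tangent = np.argsort(np.abs(evecs.T @ normal))[:2]  # drop normal direction
    m1, m2 = sorted(evals[tangent], reverse=True)
    return 3 * m1 - m2, 3 * m2 - m1, evecs[:, tangent]

# Unit-sphere check: a ring of neighbors around the north pole -> k1 = k2 = 1.
pole, n = np.array([0, 0, 1.0]), np.array([0, 0, 1.0])
ring = [np.array([np.cos(a), np.sin(a), 0.0]) * np.sin(0.3)
        + np.array([0, 0, np.cos(0.3)])
        for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
print(curvature_tensor(pole, n, np.array(ring)))
```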
Independent vector analysis (IVA) is a recently proposed technique, an application of which is to solve the frequency domain blind source separation problem. Compared with the traditional complex-valued independent component analysis plus permutation correction approach, the largest advantage of IVA is that the permutation problem is directly addressed by IVA rather than resorting to the use of an ad hoc permutation resolving algorithm after a separation of the sources in multiple frequency bands. In this article, two updates for IVA are presented. First, a novel subband construction method is introduced: IVA will be conducted in subbands from high frequency to low frequency rather than in the full frequency band, and the fact that the inter-frequency dependencies in subbands are stronger allows a more efficient approach to the permutation problem. Second, to improve robustness against noise, the IVA nonlinearity is calculated only in the signal subspace, which is defined by the eigenvector associated with the largest eigenvalue of the signal correlation matrix. Different experiments were carried out on a software suite developed by us, and dramatic performance improvements were observed using the proposed methods. Lastly, as an example of a real-world application, IVA with the proposed updates was used to separate vibration components from high-speed train noise data.
['Yueyue Na', 'Jian Yu', 'Bianfang Chai']
Independent vector analysis using subband and subspace nonlinearity
29,834
A new signal subspace approach for sinusoidal parameter estimation of multiple tones is proposed in this paper. Our main ideas are to arrange the observed data into a matrix without reuse of elements and exploit the principal singular vectors of this matrix for parameter estimation. Compared with the conventional subspace methods, which employ Hankel-style matrices with redundant entries, the proposed approach is more computationally efficient. Computer simulations are also included to compare the proposed methodology with the weighted least squares and ESPRIT approaches in terms of estimation accuracy and computational complexity.
['Weize Sun', 'Hing-Cheung So']
Efficient parameter estimation of multiple damped sinusoids by combining subspace and weighted least squares techniques
232,502
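A minimal sketch of the no-reuse matrix idea for a single damped cisoid: packing the samples column-wise (each sample used exactly once) makes the matrix rank-one, and the pole can be read off the principal left singular vector. This is illustrative only, not the paper's full multi-tone estimator:

```python
# Damped-cisoid pole estimation from a no-reuse data matrix.
import numpy as np

n, L = 240, 12
d, w = 0.01, 0.9
rng = np.random.default_rng(4)
x = np.exp((-d + 1j * w) * np.arange(n)) + 0.05 * (
    rng.standard_normal(n) + 1j * rng.standard_normal(n))

Y = x.reshape(n // L, L).T            # L x M, each sample appears once
U, s, Vh = np.linalg.svd(Y, full_matrices=False)
u = U[:, 0]                           # principal left singular vector ~ [z^0..z^(L-1)]
z = (u[:-1].conj() @ u[1:]) / (u[:-1].conj() @ u[:-1])   # LS shift-ratio fit
print("estimated damping:", -np.log(abs(z)), " frequency:", np.angle(z))
```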
Cloud computing is a promising paradigm able to rationalize the use of hardware resources by means of virtualization. Virtualization allows to instantiate one or more virtual machines (VMs) on top of a single physical machine managed by a virtual machine monitor (VMM). Similarly to any other software, a VMM experiences aging and failures. Software rejuvenation is a proactive fault management technique that involves terminating an application, cleaning up the system internal state, and restarting it to prevent the occurrence of future failures. In this work, we propose a technique to model and evaluate the VMM aging process and to investigate the optimal rejuvenation policy that maximizes the VMM availability under variable workload conditions. Starting from dynamic reliability theory and adopting symbolic algebraic techniques, we investigate and compare existing time-based VMM rejuvenation policies. We also propose a time-based policy that adapts the rejuvenation timer to the VMM workload condition improving the system availability. The effectiveness of the proposed modeling technique is demonstrated through a numerical example based on a case study taken from the literature.
['Dario Bruneo', 'Salvatore Distefano', 'Francesco Longo', 'Antonio Puliafito', 'Marco Scarpa']
Workload-Based Software Rejuvenation in Cloud Systems
531,401
In this paper, we propose a new approach to entity-based information retrieval by exploiting semantic annotations of documents. With the increased availability of structured knowledge bases and semantic annotation techniques, we can capture documents and queries at their semantic level to avoid the high semantic ambiguity of terms and to bridge the language barrier between queries and documents. Based on various semantic interpretations, users can refine the queries to match their intents. By exploiting the semantics of entities and their relations in knowledge bases, we propose a novel ranking scheme to address the information needs of users.
['Lei Zhang', 'Michael Färber', 'Thanh Tran', 'Achim Rettinger']
Exploiting semantic annotations for entity-based information retrieval
794,003
['Daniel S. Soper', 'Ofir Turel']
Theory in North American Information Systems Research: A Culturomic Analysis
724,357
Resource allocation for secondary users is an important issue in cognitive radio networks. In our article, we introduce a resource allocation scheme for secondary users to share spectrum in a cognitive radio network. Secondary users can exploit the spectrum owned by primary links when the interference level does not exceed certain requirements. Uncertainties of channel gains pose a great impact on the allocation scheme. Since there are uncertainties about the channel states, we apply chance constraints to represent the interference level requirements with uncertainties. Secondary users can exceed the interference level with a predefined small probability level. Since chance constraints are generally difficult to solve and full information about the uncertain variables is not available due to the fading effects of wireless channels, we reformulate the constraints into stochastic expectation constraints. With sample average approximation method, we propose stochastic distributed learning algorithms to help secondary users satisfy the constraints with the feedback information from primary links when maximizing the utilities.
['Kenan Zhou', 'Tat M. Lok']
Resource allocation for secondary users with chance constraints based on primary links control feedback information
301,491
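A toy sample-average-approximation check of a chance constraint: estimate the interference-violation probability by Monte Carlo and admit the largest power on a grid that keeps it below ε. The channel model and all numbers are illustrative assumptions, not the paper's learning algorithm:

```python
# SAA-style chance-constraint check for a secondary user's transmit power.
import numpy as np

def violation_prob(power, threshold, n_samples=10000, seed=5):
    rng = np.random.default_rng(seed)
    gain = rng.exponential(1.0, n_samples)      # Rayleigh-fading power gain
    # Indicator average approximates P(interference > threshold).
    return np.mean(power * gain > threshold)

eps, threshold = 0.05, 2.0
# Largest power (on a grid) that keeps the violation probability under eps:
for p in np.arange(2.0, 0.0, -0.05):
    if violation_prob(p, threshold) <= eps:
        print("admissible power:", round(p, 2))
        break
```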
Semi-supervised learning is necessary for the extensive application of machine learning in practice. It can use unlabeled data to improve the performance of existing supervised machine learning methods. In this work, we addressed the biomedical relation classification problem by utilizing a semi-supervised method which can train Maximum Entropy models according to Generalized Expectation criteria. In the proposed method, instead of the "instance labeling" used in previous works, "feature labeling" was applied to obtain the training data, which can save a great deal of labeling time. A topic model was involved to choose the features for labeling. Experimental results show that the proposed method can dramatically improve the performance of biomedical relation classification through incorporating unlabeled data by feature labeling.
['Chengjie Sun', 'Lin Yao', 'Lei Lin', 'Xuejun Sha', 'Xiaolong Wang']
Semi-supervised biomedical relation classification using generalized expectation criteria
44,742
The exokernel operating system architecture safely gives untrusted software efficient control over hardware and software resources by separating management from protection. This paper describes an exokernel system that allows specialized applications to achieve high performance without sacrificing the performance of unmodified UNIX programs. It evaluates the exokernel architecture by measuring end-to-end application performance on Xok, an exokernel for Intel x86-based computers, and by comparing Xok's performance to the performance of two widely-used 4.4BSD UNIX systems (FreeBSD and OpenBSD). The results show that common unmodified UNIX applications can enjoy the benefits of exokernels: applications either perform comparably on Xok/ExOS and the BSD UNIXes, or perform significantly better. In addition, the results show that customized applications can benefit substantially from control over their resources (e.g., a factor of eight for a Web server). This paper also describes insights about the exokernel approach gained through building three different exokernel systems, and presents novel approaches to resource multiplexing.
['M. Frans Kaashoek', 'Dawson R. Engler', 'Gregory R. Ganger', 'Héctor M. Briceño', 'Russell Hunt', 'David Mazières', 'Thomas Pinckney', 'Robert Grimm', 'John Jannotti', 'Kenneth Mackenzie']
Application performance and flexibility on exokernel systems
421,030
Internet-based volunteer computing projects such as SETI@home are currently restricted to performing coarse-grained, embarrassingly parallel tasks. This is partly due to the "pull" nature of task distribution in volunteer computing environments, where workers request tasks from the master rather than the master assigning tasks to arbitrary workers. In this paper we develop algorithms for computing batches of medium-grained tasks with soft deadlines in pull-style volunteer computing environments. Using assumptions about worker availability intervals based on previous studies, we develop models of unreliable workers in volunteer computing environments. These models are used to develop algorithms for task distribution in volunteer computing systems with a high probability of meeting batch deadlines. We develop algorithms for perfectly reliable workers, computation-reliable workers and unreliable workers. The effectiveness of the algorithms is demonstrated by using traces from actual execution environments.
['Eric M. Heien', 'Noriyuki Fujimoto', 'Kenichi Hagihara']
Computing low latency batches with unreliable workers in volunteer computing environments
367,325
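A back-of-the-envelope sketch of the replication question behind such algorithms: under the simplifying assumption that workers independently finish a task in time with probability p, sending each task to n workers meets the deadline with probability at least `target` when 1 − (1 − p)^n ≥ target:

```python
# Minimal replication-count calculation for unreliable volunteer workers.
import math

def replicas_needed(p, target):
    # Solve 1 - (1 - p)^n >= target for the smallest integer n.
    return math.ceil(math.log(1 - target) / math.log(1 - p))

for p in (0.3, 0.5, 0.8):
    print(f"p={p}: send each task to {replicas_needed(p, 0.99)} workers")
```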
Data Warehousing technologies and On-Line Analytical Processing (OLAP) feature a wide range of techniques for the analysis of structured data. However, these techniques are inadequate when it comes to analyzing textual data. Indeed, classical aggregation operators have earned their spurs in the online analysis of numerical data, but are unsuitable for the analysis of textual data. To alleviate this shortcoming, on-line analytical processing in text cubes requires new analysis operators adapted to textual data. In this paper, the authors propose a new aggregation operator named Text Label (TLabel), based on text categorization. Their operator aggregates textual data in several classes of documents. Each class is associated with a label that represents the semantic content of the textual data of the class. TLabel is founded on a tailoring of text mining techniques to OLAP. To validate their operator, the authors perform an experimental study and the preliminary results show the interest of their approach for Text OLAP.
['Lamia Oukid', 'Omar Boussaid', 'Nadjia Benblidia', 'Fadila Bentayeb']
TLabel: A New OLAP Aggregation Operator in Text Cubes
928,067
Protein complexes play a critical role in understanding the function of cell machinery. Most existing protein complex detection algorithms cannot reflect the dynamics of protein complexes. In this paper, a novel algorithm named cuckoo search clustering algorithm (CSCA) is proposed to detect protein complexes in dynamic protein-protein interaction networks (DPIN), inspired by the cuckoo search (CS) mechanism. First, we constructed dynamic protein networks and detected protein complex cores in every dynamic sub-network. Then, CS was used to cluster the protein attachments to the cores. The experimental results on the DIP dataset and Krogan dataset demonstrate that CSCA is more effective at identifying protein complexes than other typical methods.
['Jie Zhao', 'Xiujuan Lei', 'Fang-Xiang Wu']
Identifying protein complexes in dynamic protein-protein interaction networks based on Cuckoo Search algorithm
981,915
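For context, the heavy-tailed Levy-flight step that cuckoo search relies on for exploration, generated with Mantegna's algorithm; this is the textbook operator only, not the paper's full CSCA pipeline:

```python
# Levy-flight step via Mantegna's algorithm, the exploration move in
# cuckoo search. Parameters are the usual textbook choices.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=np.random.default_rng(6)):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)       # heavy-tailed step vector

# One cuckoo move: perturb a candidate solution at Levy scale.
x = np.zeros(5)
print(x + 0.01 * levy_step(5))
```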
Due to non-ideal technology scaling, delivering a stable supply voltage is increasingly challenging. Furthermore, competition for limited chip interface resources (i.e., C4 pads) between power supply and I/O, and the loss of such resources to electromigration, means that constructing a power delivery network (PDN) that satisfies noise margins without compromising performance is and will remain a critical problem for architects and circuit designers alike. Simple guardbanding will no longer work, as the consequent performance penalty will grow with technology scaling. In this paper, we develop a pre-RTL PDN model, VoltSpot, for the purpose of studying the performance and noise tradeoffs among power supply and I/O pad allocation, the effectiveness of noise mitigation techniques, and the consequent implications of electromigration-induced PDN pad failure. Our simulations demonstrate that, despite their integral role in the PDN, power/ground pads can be aggressively reduced (by conversion into I/O pads) to their electromigration limit with minimal performance impact from extra voltage noise - provided the system implements a suitable noise-mitigation strategy. The key observation is that even though reducing power/ground pads significantly increases the number of voltage emergencies, the average noise amplitude increase is small. Overall, we can triple I/O bandwidth while maintaining target lifetimes and incurring only 1.5% slowdown.
['Runjie Zhang', 'Ke Wang', 'Brett H. Meyer', 'Mircea R. Stan', 'Kevin Skadron']
Architecture implications of pads as a scarce resource
300,914
Background: This research analyzes teleconsultation from both a mechanistic and a complex adaptive system (CAS) dominant logic in order to further understand the influence of dominant logic on utilization rates of teleconsultation projects. In both dominant logics, the objective of teleconsultation projects is to increase access to and quality of healthcare delivery in a cost-efficient manner. A mechanistic dominant logic perceives teleconsultation as closely resembling the traditional service delivery model, while a CAS dominant logic focuses on the system's emergent behavior of learning resulting from the relationships and interactions of participating healthcare providers.
['David L. Paul', 'Reuben R. McDaniel']
Influences on teleconsultation project utilization rates: the role of dominant logic
953,417
The Venus Operating System is an experimental multiprogramming system which supports five or six concurrent users on a small computer. The system was produced to test the effect of machine architecture on complexity of software. The system is defined by a combination of micro-programs and software. The microprogram defines a machine with some unusual architectural features; the software exploits these features to define the operating system as simply as possible. In this paper the development of the system is described, with particular emphasis on the principles which guided the design.
['Barbara Liskov']
The design of the Venus Operating System
237,913
Environmental studies form an increasingly popular application domain for machine learning and data mining techniques. In this paper we consider two applications of decision tree learning in the domain of river water quality: a) the simultaneous prediction of multiple physico-chemical properties of the water from its biological properties using a single decision tree (as opposed to learning a different tree for each different property) and b) the prediction of past physico-chemical properties of the river water from its current biological properties. We discuss some experimental results that we believe are interesting both to the application domain experts and to the machine learning community.
['Hendrik Blockeel', 'Saso Dzeroski', 'J. Grbovic']
Simultaneous Prediction of Multiple Chemical Parameters of River Water Quality with TILDE
253,168
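A hedged propositional stand-in for the multi-target idea (TILDE itself is a first-order logical decision tree learner): scikit-learn's DecisionTreeRegressor fits a single tree that predicts several water-quality parameters jointly; all data below are synthetic:

```python
# One decision tree, multiple simultaneous targets (multi-output regression).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.random((300, 8))                       # biological indicators (toy)
Y = np.column_stack([X[:, 0] * 3 + X[:, 1],    # e.g. nitrate level
                     X[:, 2] - 2 * X[:, 3],    # e.g. phosphate level
                     X[:, 0] + X[:, 4]])       # e.g. oxygen saturation

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)
tree = DecisionTreeRegressor(max_depth=5).fit(Xtr, Ytr)  # ONE tree, 3 targets
print("joint R^2:", tree.score(Xte, Yte))
```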