Columns: abstract (string, 5 to 10.1k chars), authors (string, 9 to 1.96k chars), title (string, 5 to 367 chars), __index_level_0__ (int64, 1 to 1,000k)
Use of See-Through Wearable Display as an Interface for a Humanoid Robot
['Shu Matsuura']
Use of See-Through Wearable Display as an Interface for a Humanoid Robot
855,192
Directional antennas can significantly improve the spatial reuse of a mobile ad hoc network (MANET), leading to higher network throughput. This gain comes with a substantial energy saving that results from beamforming the transmitter and/or receiver antennas in the appropriate directions. However, several medium access problems resurface when directional antennas are integrated into existing MAC protocols. We propose a power-controlled MAC protocol for directional antennas that ameliorates these problems. Our protocol allows for dynamic adjustment of the transmission power for both data and clear-to-send (CTS) packets to optimize energy consumption. It provides a mechanism for permitting interference-limited concurrent transmissions and choosing the appropriate tradeoff between throughput and energy consumption. The protocol enables nodes to implement load control in a distributed manner, whereby the total interference in the neighborhood of a receiver is upper-bounded. Simulation results demonstrate that the combined gain from concurrent transmissions using directional antennas and power control results in up to 89% saving in energy compared to a previously proposed protocol and to the CSMA/CA scheme used in the IEEE 802.11 standard. At the same time, network throughput is improved by 79% and 185%, respectively, over these protocols.
['Aman Arora', 'Marwan Krunz']
Interference-limited MAC protocol for MANETs with directional antennas
391,637
We present an approach for generating image mosaics with irregular tiles made up from patches taken from photographs, paintings and texture images. We propose a method to generate irregular tiling patterns using polygon tessellation in conjunction with a feature-based segmentation scheme, so that features in the input image can be better preserved in the generated mosaics. To avoid the mismatch in roughness between the sub-image in the tile region of the input image and the tile images in the tile dataset, which can arise with previous RGB-color-based image descriptors, we introduce the concept of region entropy into the image descriptor to achieve a better match in both color and roughness. A few mosaic images generated by our system are presented, some of which exhibit characteristics of Chinese art.
['Lina Zhang', 'Jinhui Yu']
Image Mosaics with Irregular Tiling
404,072
Traditionally, digital testing of integrated semiconductor circuits has focused on manufacturing defects. There is another class of failures that happens due to circuit marginalities. Circuit-marginality failures are on the rise due to shrinking process geometries, diminishing supply voltage, sharper signal-transition rates, and aggressive styles in circuit design. There are many different marginality issues that may render a circuit nonoperational. Capacitive cross coupling between interconnects is known to be a leading cause for marginality-related failures. In this paper, we present novel techniques to model and prioritize capacitive crosstalk faults. Experimental results are provided to show effectiveness of the proposed modeling technique on large industrial designs.
['Sandip Kundu', 'Sujit T. Zachariah', 'Yi-Shing Chang', 'Chandra Tirumurti']
On modeling crosstalk faults
463,833
Wireless technologies such as WiFi and Bluetooth are widely used for mobile data communications. However, these technologies are affected by interference resulting from shared access to unlicensed frequency bands. Visible light communication is an alternative means for data exchange over an optical channel. It has several advantages over currently used radio-based technologies, including a full-duplex channel and limited interference from other sources. We designed and implemented a system for bidirectional visible light communications between smartphones. Specifically, our solution employs dynamically changing Quick Response (QR) codes shown on the display of one mobile phone to encode the data and the front-facing camera of another phone to receive the data. We devised a supporting communication protocol for reliable communications and realized a file exchange application. Moreover, we carried out an experimental evaluation of our system with focus on communication performance and power consumption. The obtained results show that our solution is reliable and comparable with existing radio-based schemes from the user perspective.
['Maria L. Montoya Freire', 'Mario Di Francesco']
Reliable and bidirectional camera-display communications with smartphones
848,570
Wireless Local Area Networks (WLANs) based on the IEEE 802.11 standards are becoming increasingly popular in business, government, education, and public and personal use. However, when transmissions are broadcast over radio waves, interception and masquerading become trivial for anyone with a radio, so additional mechanisms are needed to protect the communication. In this paper, we introduce a new algorithm for location-based access control in secure wireless systems based on the IEEE 802.11 standard. In this algorithm, location areas are defined by the shared coverage of multiple wireless access points (APs) in order to limit the transmission range of each AP, and IEEE 802.11f (Inter-Access Point Protocol, IAPP) is used to authenticate the location claims of clients in those areas.
['Tanatat Saelim', 'Prawit Chumchu']
A new algorithm for location-based access control using IAPP in WLANs
203,779
In this paper, the authors consider a flow shop scheduling problem with sequence-dependent set-up times in an uncertain environment. The objective is to minimise the weighted mean completion time. As for uncertainty, set-up and processing times are assumed to be non-deterministic. The authors propose two different approaches to deal with the uncertainty of the input data: robust optimisation (RO) and fuzzy optimisation. First, a deterministic mixed-integer linear programming model is presented for the general problem. Then, the robust counterpart of the proposed model is developed, followed by the fuzzy flow shop model. Moreover, a real case study of the Tehran-Madar Company, a producer of printed circuit boards and OEM products, is presented. Finally, the three approaches, namely deterministic, fuzzy and robust, are compared at length on a set of generated numerical examples.
['Seyed Mohammad Gholami-Zanjani', 'Mohammadmehdi Hakimifar', 'Najmesadat Nazemi', 'Fariborz Jolai']
Robust and Fuzzy Optimisation Models for a Flow shop Scheduling Problem with Sequence Dependent Setup Times: A real case study on a PCB assembly company
809,603
Over the past six years, Seattle University's Master of Software Engineering program has adopted a common community-based software engineering project as the basis for class projects in a sequence of required and elective courses. These related projects offer a unifying experience for students in the program, allow in-depth treatment of course topics on a real software project, address needs of local non-profit organizations, and better prepare the students for their professional careers through civic engagement and leadership.
['Roshanak Roshandel', 'Jeff Gilles', 'Richard LeBlanc']
Using community-based projects in software engineering education
84,701
This research shows the improvement obtained by including principal component analysis as part of feature production in the generation of a speech key. The main architecture includes an automatic speech segmenter and a classifier. The first, using a forced-alignment configuration, computes a set of primary features, obtains a phonetic acoustic model, and finds the beginnings and ends of the phones in each utterance. The primary features are then transformed according to both the phone model parameters and the phone segments per utterance. Before these processed features are fed to the classifier, the principal component analysis algorithm is applied to the data and a new set of secondary features is built. A support vector machine classifier then generates a hyperplane capable of producing a phone key. Finally, the key is hardened by a phone-spotting technique. Results for 10, 20 and 30 users on the YOHO database show 90% accuracy.
['Juan Arturo Nolazco-Flores', 'J. Carlos Mex-Perera', 'L. Paola García-Perera', 'Brenda Sanchez-Torres']
Using PCA to improve the generation of speech keys
863,300
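The secondary-feature step described in the abstract above (applying PCA to the processed features before the SVM classifier) can be sketched as follows. This is a minimal illustration of the generic PCA transform only, not the paper's implementation; the array shapes and component count are assumptions for the example.

```python
import numpy as np

def pca_transform(features, n_components):
    """Project feature vectors onto their top principal components.

    features: (n_samples, n_dims) array of primary features.
    Returns the (n_samples, n_components) secondary features.
    """
    # Center the data, as PCA requires.
    centered = features - features.mean(axis=0)
    # Eigen-decomposition of the covariance matrix.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; keep the largest n_components.
    order = np.argsort(eigvals)[::-1][:n_components]
    return centered @ eigvecs[:, order]

# Illustrative use: 100 utterance-level feature vectors of dimension 12.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))
secondary = pca_transform(X, n_components=4)
```

By construction the secondary features are decorrelated, which is what makes them a convenient input for the downstream classifier.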
The focus of this paper is dual distribution channels in business-to-business markets. We take the perspective of the distributor, and examine how different forms of competition with a manufacturer-owned channel impact distributor opportunism. Next, we consider how the same forms of competition impact the distributor's end customers. Based on a multi-industry field study of industrial distributors, we highlight the complex processes that characterize dual distribution systems. We show that while competition with a manufacturer-owned channel increases distributor opportunism, it also has the potential to benefit the distributor's end customers. In addition, although actions taken by a manufacturer to create vertical separation between channels limit competition, such actions also reduce end customer satisfaction.
['Alberto Sa Vinhas', 'Jan B. Heide']
Forms of Competition and Outcomes in Dual Distribution Channels: The Distributor's Perspective
286,142
An Equivalent Definition of Pan-Integral
['Yao Ouyang', 'Jun Li']
An Equivalent Definition of Pan-Integral
878,862
SemanticHPST is a project that applies ICT, especially Semantic Web technologies, to the history and philosophy of science and technology (HPST). The main difficulties in HPST are the large diversity of sources and points of view and the large volume of data. HPST scholars therefore need new digital-humanities tools based on the Semantic Web. To ensure a certain level of genericity, the project is initially based on three sub-projects: the first devoted to the port-arsenal of Brest, the second to the correspondence of Henri Poincaré, and the third to the concept of energy. The aim of this paper is to present the project, its issues and goals, and the first results and objectives in harvesting distributed corpora and in advanced search over HPST corpora. Finally, we point out some epistemological issues raised by this project.
['Olivier Bruneau', 'Serge Garlatti', 'Muriel Guedj', 'Sylvain Laube', 'Jean Lieber']
SemanticHPST: Applying Semantic Web Principles and Technologies to the History and Philosophy of Science and Technology
600,634
The goal of this research is to design a fuzzy multidimensional model to manage learning object repositories. This model will provide the required elements to develop an intelligent system for information retrieval on learning object repositories based on OLAP multidimensional modeling and soft computing tools. It will handle the uncertainty of this data through a flexible approach.
['Gloria Appelgren Lara', 'Miguel Delgado', 'Nicolás Marín']
Fuzzy Multidimensional Modelling for Flexible Querying of Learning Object Repositories
578,702
The full-duplex repeater (FDR) has previously been proposed as an alternative to half-duplex operation using CSMA/CD for controlling shared access to Gigabit Ethernet. In this paper, the basic FDR architecture is described and two extensions for traffic control are introduced. Using simulation methods, the performance of the Gigabit FDR is studied under different topologies and population sizes for a range of offered load. It is shown that the FDR provides a dramatic performance improvement over CSMA/CD (using both BEB and BLAM arbitration) at high load. The Gigabit FDR is also compared with switched Ethernet in the context of medical image retrieval. It is shown that for medical image retrieval, the performance of the Gigabit FDR is much better than 100/100 or 1000/100-Mbps switched Ethernet, and equivalent to 1000/1000-Mbps switched Ethernet for low levels of non-image background traffic.
['Kenneth J. Christensen', 'Mart L. Molle', 'Sifang Li']
Comparison of the Gigabit Ethernet full-duplex repeater, CSMA/CD, and 1000/100-Mbps switched Ethernet
448,939
The gradient weighted least-squares criterion is a popular criterion for conic fitting. When the non-linear minimisation problem is solved using the eigenvector method, the minimum is not reached and the resulting solution is an approximation. In this paper, we refine the existing eigenvector method so that the minimisation of the non-linear problem becomes exact. We then apply the refined algorithm to the renormalisation approach, so that the new iterative scheme yields a bias-corrected solution based on the exact minimiser of the cost function. Experimental results show the improved performance of the proposed algorithm.
['Guoyu Wang', 'Zweitze Houkes', 'Bing Zheng', 'Xin Li']
A note on conic fitting by the gradient weighted least-squares estimation: refined eigenvector solution
393,556
Archaeology is emerging as one of the key areas of applications for laser range imaging. This particular context imposes a number of specific constraints on the design and operations of range sensors. In this paper, we discuss some of the issues in designing and using laser range sensor systems for archaeology. Results obtained on remote archaeological sites will serve to illustrate these considerations.
['Guy Godin', 'Francois Blais', 'Luc Cournoyer', 'J.-Angelo Beraldin', 'Jacques Domey', 'John Taylor', 'Marc Rioux', 'Sabry F. El-Hakim']
Laser range imaging in archaeology: issues and results
154,519
Content addressable memories (CAMs) have significantly lower capacities than RAMs. Following a summary of large-capacity CAM applications and a brief tutorial look at CAM operation, this paper reviews the sources of this capacity disadvantage: comparator area overhead and difficulty implementing two-dimensional decoding. Past attempts at achieving higher CAM density and capacity are reviewed, and advantages and disadvantages of each are discussed qualitatively. Architectures are divided into the broad classes of serial and fully parallel. The former include bit-serial, Orthogonal-RAM-based, insertion-memory, word-serial, multiport, vector-centered, pattern-addressable memory, and systolic associative memory. The latter include standard architectures, post-encoding, and pre-classification. A taxonomy, providing the first structured comparison of existing techniques, is presented. Thereafter, four architectures (two serial and two fully parallel) are quantitatively analyzed, in terms of delay, area, and power, and the cost-performance measures area x delay and power x delay. The fully-parallel architectures, despite their high cost, produce superior cost-performance results.
['K.J. Schultz', 'P. Glenn Gulak']
Architectures for large-capacity CAMs
323,043
Agile teams are described as "self-organizing". How these teams actually organize themselves in practice, however, is not well understood. Through Grounded Theory research involving 24 Agile practitioners across 14 software organizations in New Zealand and India, we identified six informal roles that team members adopt in order to help their teams self-organize. These roles --- Mentor, Co-ordinator, Translator, Champion, Promoter, and Terminator --- help teams learn Agile practices, liaise with customers, maintain management support, and remove ineffective team members. Understanding these roles will help software teams become self-organizing, and should guide Agile coaches in working with Agile teams.
['Rashina Hoda', 'James Noble', 'Stuart Marshall']
Organizing self-organizing teams
486,257
Background: High-throughput bio-OMIC technologies are producing high-dimension data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This “large p, small n” paradigm in the area of biomedical “big data” may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increased time requirement for finding the globally optimal solution, all the existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets.
['Ruiquan Ge', 'Manli Zhou', 'Youxi Luo', 'Qinghan Meng', 'Guoqin Mai', 'Dongli Ma', 'Guoqing Wang', 'Fengfeng Zhou']
McTwo: a two-step feature selection algorithm based on maximal information coefficient
693,643
This paper presents a method to classify social media users based on their socioeconomic status. Our experiments are conducted on a curated set of Twitter profiles, where each user is represented by the posted text, topics of discussion, interactive behaviour and estimated impact on the microblogging platform. Initially, we formulate a 3-way classification task, where users are classified as having an upper, middle or lower socioeconomic status. A nonlinear, generative learning approach using a composite Gaussian Process kernel provides significantly better classification accuracy (75 %) than a competitive linear alternative. By turning this task into a binary classification, upper vs. middle and lower class, the proposed classifier reaches an accuracy of 82 %.
['Vasileios Lampos', 'Nikolaos Aletras', 'Jk Geyti', 'Bin Zou', 'Ingemar J. Cox']
Inferring the Socioeconomic Status of Social Media Users Based on Behaviour and Language
707,668
In this work, we address the problem of cross-modal retrieval in presence of multi-label annotations. In particular, we introduce multi-label Canonical Correlation Analysis (ml-CCA), an extension of CCA, for learning shared subspaces taking into account high level semantic information in the form of multi-label annotations. Unlike CCA, ml-CCA does not rely on explicit pairing between modalities, instead it uses the multi-label information to establish correspondences. This results in a discriminative subspace which is better suited for cross-modal retrieval tasks. We also present Fast ml-CCA, a computationally efficient version of ml-CCA, which is able to handle large scale datasets. We show the efficacy of our approach by conducting extensive cross-modal retrieval experiments on three standard benchmark datasets. The results show that the proposed approach achieves state of the art retrieval performance on the three datasets.
['Viresh Ranjan', 'Nikhil Rasiwasia', 'C. V. Jawahar']
Multi-label Cross-Modal Retrieval
581,828
Towards a Convex HMM Surrogate for Word Alignment.
['Andrei Simion', 'Michael J. Collins', 'Cliff Stein']
Towards a Convex HMM Surrogate for Word Alignment.
963,266
Counterexamples for Expected Rewards
['Tim Quatmann', 'Nils Jansen', 'Christian Dehnert', 'Ralf Wimmer', 'Erika Ábrahám', 'Joost-Pieter Katoen', 'Bernd Becker']
Counterexamples for Expected Rewards
669,863
In memoriam: Herbert J. Ryser 1923-1985.
['Richard A. Brualdi']
In memoriam: Herbert J. Ryser 1923-1985.
792,883
This paper presents a computer-supported approach for providing 'enhanced' discovery learning in informal settings like museums. It is grounded on a combination of gesture-based interactions and artwork-embedded AIED paradigms, and is implemented through a distributed architecture.
['Emmanuel G. Blanchard', 'Alin Nicolae Zanciu', 'Haydar Mahmoud', 'James S. Molloy']
Enhancing In-Museum Informal Learning by Augmenting Artworks with Gesture Interactions and AIED Paradigms
642,436
Finite State Controllers (FSCs) are an effective way to represent sequential plans compactly. By imposing appropriate conditions on transitions, FSCs can also represent generalized plans that solve a range of planning problems from a given domain. In this paper we introduce the concept of hierarchical FSCs for planning by allowing controllers to call other controllers. We show that hierarchical FSCs can represent generalized plans more compactly than individual FSCs. Moreover, our call mechanism makes it possible to generate hierarchical FSCs in a modular fashion, or even to apply recursion. We also introduce a compilation that enables a classical planner to generate hierarchical FSCs that solve challenging generalized planning problems. The compilation takes as input a set of planning problems from a given domain and outputs a single classical planning problem, whose solution corresponds to a hierarchical FSC.
['Javier Segovia-Aguas', 'Sergio Jiménez', 'Anders Jonsson']
Hierarchical finite state controllers for generalized planning
839,568
In this paper, closed-form discrete Gaussian functions are proposed. The first property of these functions is that their discrete Fourier transforms are still discrete Gaussian functions with different index parameter. This index parameter, which is an analogy to the variance in continuous Gaussian functions, controls the width of the function shape. Second, the discrete Gaussian functions are positive and bell-shaped. More important, they also have finite support and consecutive zeros. Thus, they satisfy Tao’s and Donoho’s uncertainty principle of discrete signal. The construction of these discrete Gaussian functions is inspired by Kong’s zeroth-order discrete Hermite Gaussian functions. Three examples are discussed.
['Soo-Chang Pei', 'Kuo-Wei Chang']
Optimal Discrete Gaussian Function: The Closed-Form Functions Satisfying Tao’s and Donoho’s Uncertainty Principle With Nyquist Bandwidth
710,250
This paper presents an original solution to the camera control problem in a virtual environment. Our objective is to present a general framework that allows the automatic control of a camera in a dynamic environment. The proposed method is based on the image-based control, or visual servoing, approach. It consists of positioning a camera according to the information perceived in the image. This is thus a very intuitive approach to animation. To be able to react automatically to modifications of the environment, we also considered the introduction of constraints into the control. This approach is thus adapted to highly reactive contexts (virtual reality, video games). Numerous examples dealing with classic problems in animation are considered within this framework and presented in this paper.
['Eric Marchand', 'Nicolas Courty']
Image-Based Virtual Camera Motion Strategies
282,241
This paper presents a Web adaptation and personalization architecture that uses cognitive aspects as its core filtering element. The innovation of the proposed architecture lies in the creation of a comprehensive user profile that combines parameters capturing intrinsic user characteristics, such as visual, cognitive, and emotional processing, with "traditional" user profiling characteristics; together, these yield the most optimized adapted and personalized result for the user.
['Panagiotis Germanakos', 'Nikos Tsianos', 'Zacharias Lekkas', 'Constantinos Mourlas', 'Marios Belk', 'George Samaras']
Embracing Cognitive Aspects in Web Personalization Environments -- The AdaptiveWeb Architecture
33,562
We propose a stochastic iteration approach to signal set design. Four practical stochastic iterative algorithms are proposed with respect to equal and average energy constraints and sequential and batch modes. By simulation, a new optimal signal set, the L2 signal set (consisting of a regular simplex set of three signals and some zero signals), is found under the strong simplex conjecture (SSC) condition (equal a priori probability and average energy constraint) at low signal-to-noise ratios (SNR). The optimality of the L1 signal set, the confirmation of the weak simplex conjecture, and two of Dunbridge's (1967) theorems are some of the results obtained by simulations. The influence of SNR and a priori probabilities on the signal sets is also investigated via simulation. As an application to practical communication system design, the signal sets of eight two-dimensional (2-D) signals are studied by simulation under the SSC condition. Two signal sets better than 8-PSK are found. Optimal properties of the L2 signal set are analyzed under the SSC condition at low SNRs. The L2 signal set is proved to be uniquely optimal in 2-D space. The class of signal sets S(M, K) (consisting of a regular simplex set of K signals and M-K zero signals) is analyzed. It is shown that any of the signal sets S(M, K) for 3 ≤ K ≤ M-1 disproves the strong simplex conjecture for M ≥ 4, and if M ≥ 7, S(M, 2) (the L1 signal set) also disproves the strong simplex conjecture. It is proved that the L2 signal set is the unique optimal signal set in the class of signal sets S(M, K) for all M ≥ 4. Several results obtained by Steiner (see ibid., vol.40, no.5, p.721-31, 1994) for all M ≥ 7 are extended to all M ≥ 4. Finally, we show that for M ≥ 7, there exists an integer K' < M such that any of the signal sets consisting of K signals equally spaced on a circle and M-K zero signals, for 4 ≤ K ≤ K', also disprove the strong simplex conjecture.
['Yi Sun']
Stochastic iterative algorithms for signal set design for Gaussian channels and optimality of the L2 signal set
96,835
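The class of signal sets S(M, K) discussed in the abstract above, a regular simplex of K signals plus M-K zero signals, is easy to construct explicitly. The sketch below is an illustrative construction of our own (using a unit-energy normalization per simplex signal), not the paper's simulation code.

```python
import numpy as np

def signal_set(M, K):
    """S(M, K): a regular simplex of K unit-energy signals plus M-K zero signals.

    Returns an (M, K) array; each row is one signal vector.
    """
    # The K standard basis vectors minus their centroid form a regular
    # simplex centered at the origin (all pairwise distances equal).
    simplex = np.eye(K) - np.full((K, K), 1.0 / K)
    # Normalize each simplex signal to unit energy (illustrative choice).
    simplex /= np.linalg.norm(simplex, axis=1, keepdims=True)
    zeros = np.zeros((M - K, K))
    return np.vstack([simplex, zeros])

# The L2 signal set for M = 5: a 3-signal simplex plus two zero signals.
S = signal_set(M=5, K=3)
```

The normalized simplex signals have pairwise inner products of -1/(K-1), which is what makes the simplex regular.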
Objective: Models of healthcare organizations (HCOs) are often defined up front by a select few administrative officials and managers. However, given the size and complexity of modern healthcare systems, this practice does not scale easily. The goal of this work is to investigate the extent to which organizational relationships can be automatically learned from utilization patterns of electronic health record (EHR) systems. Method: We designed an online survey to solicit the perspectives of employees of a large academic medical center. We surveyed employees from two administrative areas, (1) Coding & Charge Entry and (2) Medical Information Services, and two clinical areas, (3) Anesthesiology and (4) Psychiatry. To test our hypotheses, we selected two administrative units whose work-related responsibilities involve electronic records; for the clinical areas, we selected two disciplines with very different patient responsibilities but similar access patterns and accessing personnel. We asked each group of employees about the chance of interaction between areas in the medical center, in the form of association rules (e.g., given that someone from Coding & Charge Entry accessed a patient's record, what is the chance that someone from Medical Information Services accesses the same record?). We compared the respondents' predictions with the rules learned from actual EHR utilization using linear mixed-effects regression models. Results: The findings from our survey confirm that medical center employees can distinguish between association rules of high and non-high likelihood when their own area is involved. Moreover, they can make such distinctions for any HCO area in this survey. It was further observed that, with respect to highly likely interactions, respondents from certain areas were significantly better than others at making such distinctions, and certain areas' associations were more distinguishable than others'.
Conclusions: These results illustrate that EHR utilization patterns may be consistent with the expectations of HCO employees. Our findings show that certain areas in the HCO are easier than others for employees to assess, which suggests that automated learning strategies may yield more accurate models of healthcare organizations than those based on the perspectives of a select few individuals.
['You Chen', 'Nancy M. Lorenzi', 'Steve Nyemba', 'Jonathan S. Schildcrout', 'Bradley Malin']
We work with them? Healthcare workers interpretation of organizational relations mined from electronic health records
291,226
The junction temperature of the insulated-gate bipolar transistor (IGBT) module, a power semiconductor device, directly impacts the system performance of the power conversion system (PCS); therefore, accurate prediction of the airflow rate passing through the heat sink block of the IGBT module is very important at the thermal design stage. In this paper, a thermo-fluid simulation was developed with the T–Q characteristic curve to predict the junction temperature of the IGBT module and the airflow rate of the heat sink block. A porous media model was adopted for the finned heat sink block and the air between its fins to avoid heavily concentrated meshes in the heat sink block. The proposed simulation model was compared to experimental values for the hot-spot temperature on the heat sink block, and the differences were within an average 4.0% margin of error. This simulation model can be used to evaluate the suitability of the cooling design under various operating conditions of the fan and IGBT module, with the benefit of reduced mesh generation and computation time. The simulation model also increases the flexibility of predicting airflow rates in the PCS when the airflow passage structure or the fan capacity changes.
['Chang-Woo Han', 'Seung-Boong Jeong', 'Myung-Do Oh']
Thermo-fluid simulation for the thermal design of the IGBT module in the power conversion system
645,214
In this paper, a managed object (MO) model is proposed for the management of ATM layer networks and integration with its server (SONET) and client (service access) layer networks. The proposed management model is based on the application of emerging "network-view" draft standards from ITU, ETSI and the ATM Forum to the technology-specific areas of ATM and SONET. Our main focus is on the ATM layer networks needed to support management of a network of TMN Q3 (CMIP) based ATM switches, such as Fujitsu FETEX-150 broadband switching systems; however, we also apply the same modeling principles to the SONET and service access layers to illustrate how the network-view concepts apply to integrated network management. Design goals for the model include a MIB (management information base) that is complementary to the network element (NE) agent model and can efficiently support user applications.
['Buket Aydemir', 'Joe Tanzini']
An ATM network-view model for integrated layer-network management
391,663
Smart meters are now being aggressively deployed worldwide, with tens of millions of meters in use today and hundreds of millions more to be deployed in the next few years. These low-cost (≃$50) embedded devices have not fared well under security analysis: experience has shown that the majority of current devices that have come under scrutiny can be exploited by unsophisticated attackers. The potential for large-scale attacks that target a single or a few vulnerabilities is thus very real. In this paper, we consider how diversity techniques can limit large-scale attacks on smart meters. We show how current meter designs do not possess the architectural features needed to support existing diversity approaches such as address space randomization. In response, we posit a new return address encryption technique suited to the computationally and resource limited smart meters. We conclude by considering analytically the effect of diversity on an attacker wishing to launch a large-scale attack, showing how a lightweight diversity scheme can force the time needed for a large compromise into the scale of years.
['Stephen E. McLaughlin', 'Dmitry Podkuiko', 'Adam Delozier', 'Sergei Miadzvezhanka', 'Patrick D. McDaniel']
Embedded firmware diversity for smart electric meters
763,128
In the Sesame framework, we develop a modeling and simulation environment for the efficient design space exploration of heterogeneous embedded systems. Since Sesame recognizes separate application and architecture models within a single system simulation, it needs an explicit mapping step to relate these models for co-simulation. So far in Sesame, the mapping decision has been assumed to be made intuitively by an experienced designer. However, this assumption is becoming increasingly inappropriate: realistic systems are already far too complex for intuitive decisions at an early design stage, where the design space is very large, and these systems will likely become even more complex in the near future. Besides, there exist multiple criteria to consider, such as processing times, power consumption and the cost of the architecture, which make the decision problem even harder. In this paper, the mapping decision problem is formulated as a multiobjective combinatorial optimization problem. As a solution approach, an optimization software tool implementing an evolutionary algorithm from the literature has been developed to obtain a set of best alternative mapping decisions under multiple criteria. In a case study, we have used our optimization tool to obtain a set of mapping decisions, some of which were further evaluated by the Sesame simulation framework.
['Cagkan Erbas', 'Selin Cerav Erbas', 'Andy D. Pimentel']
A multiobjective optimization model for exploring multiprocessor mappings of process networks
142,968
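The core of the multiobjective mapping formulation above is keeping only decisions that are not dominated in any criterion (e.g. processing time, power consumption, cost). As an illustrative sketch only — not the authors' evolutionary algorithm — a minimal Pareto filter over candidate mappings, where each candidate is a hypothetical tuple of objective values to be minimized:

```python
def pareto_front(points):
    """Return the non-dominated points, assuming every objective is minimized.

    A point p is dominated if some other point q is <= p in every
    objective and differs from p in at least one objective.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical mapping candidates scored as (time, power, cost):
candidates = [(10, 5, 3), (12, 4, 3), (9, 6, 4), (11, 6, 5)]
```

Here `(11, 6, 5)` is dominated by `(10, 5, 3)` and would be filtered out; an evolutionary algorithm like the one the paper uses would apply such a filter repeatedly while searching the mapping space.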
In this paper, we propose a block-based histogram of optical flow (BHOF) to generate hand representations in sign language recognition. Optical flow of the sign language video is computed in a region centered around the location of the detected hand position. The hand patches of optical flow are segmented into M spatial blocks, where each block is a cuboid of a segment of a frame across the entire sign gesture video. The histogram of each block is then computed and normalized by its sum. The feature vectors of all blocks are then concatenated as the BHOF sign gesture representation. The proposed method provides a compact scale-invariant representation of the sign language. Furthermore, the block-based histogram encodes spatial information and provides local translation invariance in the extracted optical flow. Additionally, the proposed BHOF also introduces invariance to sign language length into its representation and thereby produces promising recognition rates in signer-independent problems.
['Kian Ming Lim', 'Alan W. C. Tan', 'Shing Chiang Tan']
Block-based histogram of optical flow for isolated sign language recognition ☆
846,547
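The block-histogram-then-normalize step above can be sketched in a deliberately simplified form. This is a hypothetical 1D illustration, not the authors' implementation: real BHOF histograms 2D optical-flow patches across frames, while this sketch just splits a flat list of flow magnitudes into M blocks, histograms each block, normalizes by its sum, and concatenates:

```python
def bhof(flow_values, m_blocks, n_bins, vmax):
    """Toy block-based histogram of flow magnitudes in [0, vmax).

    Splits flow_values into m_blocks contiguous blocks, builds an
    n_bins histogram per block, normalizes each histogram by its sum,
    and concatenates the results into one feature vector.
    """
    size = len(flow_values) // m_blocks
    feature = []
    for k in range(m_blocks):
        block = flow_values[k * size:(k + 1) * size]
        hist = [0] * n_bins
        for v in block:
            idx = min(int(v / vmax * n_bins), n_bins - 1)  # clamp top edge
            hist[idx] += 1
        total = sum(hist) or 1  # avoid division by zero for empty blocks
        feature.extend(h / total for h in hist)
    return feature
```

Because each block's histogram is normalized by its own sum, the feature length depends only on M and the bin count, which is one way to see the length-invariance property claimed above.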
We consider the problem of multiple-target estimation using a colocated multiple-input multiple-output (MIMO) radar system. We employ sparse modeling to estimate the unknown target parameters (delay, Doppler) using a MIMO radar system that transmits frequency-hopping waveforms. We formulate the measurement model using a block sparse representation. We adaptively design the transmit waveform parameters (frequencies, amplitudes) to improve the estimation performance. Firstly, we derive analytical expressions for the correlations between the different blocks of columns of the sensing matrix. Using these expressions, we compute the block coherence measure of the dictionary. We use this measure to optimally design the sensing matrix by selecting the hopping frequencies for all the transmitters. Secondly, we adaptively design the amplitudes of the transmitted waveforms during each hopping interval to improve the estimation performance. To perform this amplitude design, we initialize it by transmitting constant-modulus waveforms of the selected frequencies to estimate the radar cross section (RCS) values of all the targets. Next, we make use of these RCS estimates to optimally select the waveform amplitudes. We demonstrate the performance improvement due to the optimal design of waveform parameters using numerical simulations. Further, we employ compressive sensing to conduct accurate estimation from far fewer samples than the Nyquist rate.
['Sandeep Gogineni', 'Arye Nehorai']
Frequency-Hopping Code Design for MIMO Radar Estimation Using Sparse Modeling
332,471
Inter-subject variability plays an important role in the performance of facial expression recognition. Therefore, several methods have been developed to bring the performance of a person-independent system closer to that of a person-dependent one. These techniques need different samples from a new person to increase the generalization ability. We have proposed a new approach to address this problem. It employs the person’s neutral samples as prior knowledge and a synthesis method based on the subspace learning to generate virtual expression samples. These samples have been incorporated in learning process to learn the style of the new person. We have also enriched the training data set by virtual samples created for each person in this set. Compared with previous studies, the results showed that our approach can perform the task of facial expression recognition effectively with better robustness for corrupted data.
['Amin Mohammadian', 'Hassan Aghaeinia', 'Farzad Towhidkhah']
Incorporating prior knowledge from the new person into recognition of facial expression
21,259
Specialty Task Force: A Strategic Component to Electronic Health Record (EHR) Optimization.
['Mary Rachel Romero', 'Allison Staub']
Specialty Task Force: A Strategic Component to Electronic Health Record (EHR) Optimization.
833,184
Visualization aims at providing insight to its users. Now and then I am a user myself, and use visualization trying to solve a puzzle and to satisfy my curiosity. Simple questions turn out to be challenging problems, leading to a personal quest for their solution and resulting in intriguing images and animations. In my presentation I will present three such puzzles, all in the area of mathematical visualization.
['Jarke J. van Wijk']
Keynote address Knots, maps, and tiles: Three visual puzzles
505,856
Background: Frameshift translation is an important phenomenon that contributes to the appearance of novel coding DNA sequences (CDS) and functions in gene evolution, by allowing alternative amino acid translations of gene coding regions. Frameshift translations can be identified by aligning two CDS, from the same gene or from homologous genes, while accounting for their codon structure. Two main classes of algorithms have been proposed to solve the problem of aligning CDS, either by amino acid sequence alignment back-translation, or by simultaneously accounting for the nucleotide and amino acid levels. The former does not allow accounting for frameshift translations and, up to now, the latter exclusively accounts for frameshift translation initiation, not considering the length of the translation disruption caused by a frameshift.
['S. Jammali', 'Esaie Kuitche', 'Ayoub Rachati', 'F. Belanger', 'Michelle S. Scott', 'Aïda Ouangraoua']
Aligning coding sequences with frameshift extension penalties
924,215
For integers a and b we define the Shanks chain p_1, p_2, ..., p_k of length k to be a sequence of k primes such that p_{i+1} = a*p_i^2 - b for i = 1, 2, ..., k-1. While for Cunningham chains it is conjectured that arbitrarily long chains exist, this is, in general, not true for Shanks chains. In fact, with s = ab we show that for all but 56 values of s ≤ 1000 any corresponding Shanks chain must have bounded length. For this, we study certain properties of functional digraphs of quadratic functions over prime fields, both in theory and practice. We give efficient algorithms to investigate these properties and present a selection of our experimental results.
['Edlyn Teske', 'Hugh C. Williams']
A Note on Shanks's Chains of Primes
151,694
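The chain rule p_{i+1} = a*p_i^2 - b can be made concrete with a small sketch that extends a chain while successive terms remain prime. Trial division suffices for small examples; this is illustrative only and is not the paper's functional-digraph machinery:

```python
def is_prime(n):
    """Trial-division primality test (adequate for small chain members)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def shanks_chain(p1, a, b, max_len=20):
    """Extend the chain p_{i+1} = a*p_i**2 - b while terms stay prime.

    max_len guards against degenerate fixed points (e.g. a=1, b=2, p=2).
    """
    chain = [p1] if is_prime(p1) else []
    while chain and len(chain) < max_len:
        nxt = a * chain[-1] ** 2 - b
        if not is_prime(nxt):
            break
        chain.append(nxt)
    return chain
```

For example, with a = 2, b = 3 (so s = ab = 6) the chain starting at 2 runs 2 → 5 → 47 and then stops, since 2·47² − 3 = 4415 = 5·883 is composite.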
At the port-of-entry, containers are inspected through a specific sequence of sensor stations to detect the presence of radioactive materials, biological and chemical agents, and other illegal cargo. The inspection policy, which includes the sequence in which sensors are applied and the threshold levels used at the inspection stations, affects the probability of misclassifying a container as well as the cost and time spent in inspection. This work is an extension of a paper by Elsayed et al., which considers an inspection system operating with a Boolean decision function combining station results. In this paper, we present a multiobjective optimization approach to determine the optimal sensor arrangement and threshold levels, while considering cost and time. The total cost includes cost incurred by misclassification errors and the total expected cost of inspection, while the time represents the total expected time a container spends in the inspection system. Examples which apply the approach in various systems are presented.
['Christina M. Young', 'Mingyu Li', 'Y. Zhu', 'Minge Xie', 'Elsayed A. Elsayed', 'Tsvetan Asamov']
Multiobjective Optimization of a Port-of-Entry Inspection Policy
324,365
BORM (Business Object Relationship Modeling) is a development methodology used to store knowledge of process-based business systems. BORM is based on the combination of an object-oriented approach and process-based modelling. Especially in the case of agricultural, food and environmental enterprise information systems, we need to work with new flexible modelling tools, because processes and data are constantly changing and being modified throughout the whole life cycle of such systems. In this paper, we present the use of BORM as a tool for capturing the process information required in the initial phases of information system development.
['Martin Molhanec', 'Vojtech Merunka', 'Iveta Merunková']
Business Knowledge Modelling Using the BORM Method
673,848
Evolving images using genetic programming is a complex task and the representation of the solutions has an important impact on the performance of the system. In this paper, we present two novel representations for evolving images with genetic programming. Both these representations are based on the idea of recursively partitioning the space of an image. This idea distinguishes these representations from the ones that are currently most used in the literature. The first representation that we introduce partitions the space using rectangles, while the second one partitions using triangles. These two representations are compared to one of the most well known and frequently used expression-based representations, on five different test cases. The presented results clearly indicate the appropriateness of the proposed representations for evolving images. Also, we give experimental evidence of the fact that the proposed representations have a higher locality compared to the compared expression-based representation.
['Alessandro Re', 'Mauro Castelli', 'Leonardo Vanneschi']
A Comparison Between Representations for Evolving Images
723,167
Abstract. This short paper is a response to the article by McGrath in this issue which argues that information systems (IS) researchers need to be more explicit about ‘being critical’. I accept her point, and I use this paper to offer a sketch of my personal journey in learning about criticality, and some thoughts from where I am now on various aspects of carrying out critical IS research.
['Geoff Walsham']
Learning about being critical
502,935
In this paper, we introduce a cheat-free path-discovery process for peer-to-peer online games. The algorithm finds the requested path through the active participation of the users, but cheating is detected through a controller. The controller recalculates a path segment when two peers disagree in terms of cost, and identifies the cheater using the trust profile. This eventually lowers the computational cost. There are no false positives when identifying a cheater, as the recalculation is performed by the controller.
['Dewan Tanvir Ahmed', 'Shervin Shirmohammadi']
An algorithm for measurement and detection of path cheating in virtual environments
13,390
Summary: We present a method, LineageProgram, that uses the developmental lineage relationship of observed gene expression measurements to improve the learning of developmentally relevant cellular states and expression programs. We find that incorporating lineage information allows us to significantly improve both the predictive power and interpretability of expression programs that are derived from expression measurements from in vitro differentiation experiments. The lineage tree of a differentiation experiment is a tree graph whose nodes describe all of the unique expression states in the input expression measurements, and whose edges describe the experimental perturbations applied to cells. Our method, LineageProgram, is based on a log-linear model with parameters that reflect changes along the lineage tree. L1-based regularization controls the parameters in three distinct ways: the number of genes that change between two cellular states, the number of unique cellular states, and the number of underlying factors responsible for changes in cell state. The model is estimated with proximal operators to quickly discover a small number of key cell states and gene sets. Comparisons with existing factorization techniques, such as singular value decomposition and non-negative matrix factorization, show that our method provides higher predictive power in held-out tests while inducing sparse and biologically relevant gene sets. Contact: gifford@mit.edu
['Tatsunori Hashimoto', 'Tommi S. Jaakkola', 'Richard I. Sherwood', 'Esteban O. Mazzoni', 'Hynek Wichterle', 'David K. Gifford']
Lineage-based identification of cellular states and expression programs
103,172
Airborne Wind Energy (AWE) concerns systems capable of harvesting energy from the wind, offering an efficient alternative to traditional wind turbines by flying crosswind with a tethered airfoil. Such concepts involve a system that is more difficult to control than conventional wind turbines. These systems generally cannot be operated efficiently in very low wind conditions, necessitating intervention in the form of launching and landing. In contrast to this approach, this paper proposes to continue flying in holding patterns which minimize power consumption. Efficient holding patterns are determined by solving an optimal control problem. The model is specified as a set of differential algebraic equations, and an approximation of the tether drag is taken into account. Finally, an evaluation in terms of energy is performed by means of a statistical approach.
['G. Licitra', 'S. Sieberling', 'S. Engelen', 'P. Williams', 'Richard Ruiterkamp', 'Moritz Diehl']
Optimal control for minimizing power consumption during holding patterns for airborne wind energy pumping system
978,352
A self-adaptive classifier for efficient text-stream processing is proposed. The proposed classifier adaptively speeds up its classification while processing a given text stream for various NLP tasks. The key idea behind the classifier is to reuse results for past classification problems to solve forthcoming classification problems. A set of classification problems commonly seen in a text stream is stored to reuse the classification results, while the set size is controlled by removing the least-frequently-used or least-recently-used classification problems. Experimental results with Twitter streams confirmed that the proposed classifier applied to a state-of-the-art base-phrase chunker and dependency parser speeds up its classification by factors of 3.2 and 5.7, respectively.
['Naoki Yoshinaga', 'Masaru Kitsuregawa']
A Self-adaptive Classifier for Efficient Text-stream Processing
615,508
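The reuse idea above — store results for past classification problems and evict the least-recently-used entries when the set grows too large — can be sketched as a wrapper around any base classifier. The class and names here are hypothetical, not from the paper:

```python
from collections import OrderedDict

class ReuseClassifier:
    """Wrap a base classifier and cache results for repeated problems.

    The cache is bounded: when it exceeds `capacity`, the
    least-recently-used classification problem is evicted.
    """
    def __init__(self, base_classify, capacity=1000):
        self.base_classify = base_classify
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def classify(self, features):
        key = tuple(features)
        if key in self.cache:
            self.cache.move_to_end(key)     # mark as recently used
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        label = self.base_classify(features)
        self.cache[key] = label
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently-used
        return label
```

In a text stream, identical feature vectors recur often (common base phrases, frequent dependency contexts), so the hit path skips the expensive base classification entirely — the source of the reported speed-ups.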
The paper presents computer simulations of an algorithm for non-conflict packet commutation in a crossbar switch node. In particular, we study a version of the MiMa-algorithm with a new selection of the initial element of the traffic matrix as compared to the original version of the algorithm. Our simulations utilize independent, identically distributed (i.i.d.) Bernoulli uniform load packet traffic. The obtained results indicate that the original MiMa-algorithm yields better results with respect to the throughput of the crossbar switch in comparison to the modified version.
['Tasho D. Tashev', 'Vladimir V. Monov']
Computer simulations of a modified MiMa-algorithm for a crossbar packet switch
55,678
The problem of fault-tolerant attitude tracking control for an over-actuated spacecraft in the presence of actuator faults/failures and external disturbances is addressed in this paper. Assuming that information on the inertia and bounds on the disturbances are unknown, a novel fault-tolerant control (FTC) law incorporating on-line control allocation (CA) is developed to handle actuator faults/failures. To improve the robustness of the adaptive law and stop the adaptive gain from increasing, the time-varying dead-zone modification technique is employed in parameter adaptations. It is shown that uniform ultimate boundedness of the tracking errors can be ensured. To illustrate the efficiency of the CA-based FTC strategy, numerical simulations are carried out for a rigid spacecraft under actuator faults and failures.
['Qiang Shen', 'Danwei Wang', 'Senqiang Zhu', 'Eng Kee Poh']
Inertia-free fault-tolerant spacecraft attitude tracking using control allocation
556,138
This paper presents the high-level architecture (HLA) of the research project DEWI (dependable embedded wireless infrastructure). The objective of this HLA is to serve as a reference for the development of industrial wireless sensor and actuator networks (WSANs) based on the concept of the DEWI Bubble. The DEWI Bubble is defined here as a high-level abstraction of an industrial WSAN with enhanced interoperability (via standardized interfaces), technology reusability, and cross-domain development. This paper details the design criteria used to define the HLA and the organization of the infrastructure internal and external to the DEWI Bubble. The description includes the different perspectives, models or views of the architecture: the entity model, the layered model, and the functional view model (including an overview of interfaces). The HLA constitutes an extension of the ISO/IEC SNRA (sensor network reference architecture) towards the support of industrial applications. To improve interoperability with existing approaches the DEWI HLA also reuses some features from other standardized technologies and architectures. The HLA will allow networks with different industrial sensor technologies to exchange information between them or with external clients via standard interfaces, thus providing a consolidated access to sensor information of different domains. This is an important aspect for smart city applications, Big Data and internet-of-things (IoT).
['Ramiro Samano-Robles', 'Tomas Nordström', 'Salvador Santonja', 'Werner Rom', 'Eduardo Tovar']
The DEWI high-level architecture: Wireless sensor networks in industrial applications
908,341
P2P-VoD systems have gained tremendous popularity in recent years. While existing research is mostly based on theoretical or conventional assumptions, it is particularly valuable to understand and examine how these assumptions work in realistic environments, so as to set up a solid foundation for mechanism design and optimization possibilities. In this paper, we present a comprehensive measurement study of CoolFish, a real-world P2P-VoD system. Our measurement provides several new findings which are different from the traditional assumptions or observations: the access pattern does not match Poisson distribution; session time does not have positive correlation with movie popularity; jump frequency does not have a negative correlation with movie popularity as assumed in previous studies. We analyze the reasons for these results and provide suggestions for the further study of P2P-VoD services.
['Tieying Zhang', 'Zhenhua Li', 'Hua-Wei Shen', 'Yan Huang', 'Xueqi Cheng']
A White-Box Empirical Study of P2P-VoD Systems: Several Unconventional New Findings
135,013
Motivation: Genome-wide gene expression programs have been monitored and analyzed in the yeast Saccharomyces cerevisiae, but how cells regulate global gene expression programs in response to environmental changes is still far from being understood. We present a systematic approach to quantitatively characterize the transcriptional regulatory network of the yeast cell cycle. For the interpretative purpose, 20 target genes were selected because their expression patterns fluctuated in a periodic manner concurrent with the cell cycle and peaked at different phases. In addition to the most significant five possible regulators of each specific target gene, the expression pattern of each target gene affected by synergy of the regulators during the cell cycle was characterized. Our first step includes modeling the dynamics of gene expression and extracting the transcription rate from a time-course microarray data. The second step embraces finding the regulators that possess a high correlation with the transcription rate of the target gene, and quantifying the regulatory abilities of the identified regulators. Results: Our network discerns not only the role of the activator or repressor for each specific regulator, but also the regulatory ability of the regulator to the transcription rate of the target gene. The highly coordinated regulatory network has identified a group of significant regulators responsible for the gene expression program through the cell cycle progress. This approach may be useful for computing the regulatory ability of the transcriptional regulatory networks in more diverse conditions and in more complex eukaryotes. Supplementary information: Matlab code and test data are available at http://www.ee.nthu.edu.tw/~bschen/quantitative/regulatory_network.htm
['Hong‐Chu Chen', 'Hsiao-Ching Lee', 'Tsai-Yun Lin', 'Wen-Hsiung Li', 'Bor-Sen Chen']
Quantitative characterization of the transcriptional regulatory network in the yeast cell cycle
211,887
Linear equations with large sparse coefficient matrices arise in many practical scientific and engineering problems. Previous sparse matrix algorithms for solving linear equations on a single-core CPU are highly complex and time-consuming. To solve such problems, this paper first implements a sparse-matrix parallel Jacobi iteration algorithm on a hybrid multi-core parallel system consisting of a CPU and a GPU; an optimization scheme is then proposed to improve performance in two ways, i.e., via the multi-level storage structure and the memory access mode of CUDA. Experimental results show that the parallel algorithm on the hybrid multi-core system gains higher performance than the original Jacobi iteration algorithm on the CPU. In addition, the optimization scheme is effective and feasible.
['Dongxu Yan', 'Haijun Cao', 'Xiaoshe Dong', 'Bao Zhang', 'Xingjun Zhang']
Optimizing Algorithm of Sparse Linear Systems on GPU
370,175
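The baseline that such a GPU implementation parallelizes is the classical Jacobi update x_i^(k+1) = (b_i − Σ_{j≠i} A_ij x_j^(k)) / A_ii: every component of the new iterate depends only on the previous iterate, which is what makes the method embarrassingly parallel. A minimal sequential reference version (dense lists, not the paper's sparse CUDA code):

```python
def jacobi(A, b, tol=1e-10, max_iter=500):
    """Solve Ax = b by Jacobi iteration.

    Converges e.g. for strictly diagonally dominant A; each component
    of x_new depends only on the previous iterate x, so all components
    could be computed in parallel.
    """
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x
```

A GPU version assigns one thread (or warp) per row of A; the convergence check becomes a parallel reduction, which is where the paper's memory-access and storage-structure optimizations come in.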
Denote a point in the plane by z = (x, y) and a polynomial of nth degree in z by f(z) = Σ_{i,j≥0, i+j≤n} a_ij x^i y^j. Denote by Z(f) the set of points for which f(z) = 0. Z(f) is the 2D curve represented by f(z). In this paper, we present a new approach to fitting 2D curves to data in the plane (or 3D surfaces to range data) which has significant advantages over presently known methods. It requires considerably less computation, and the resulting curve can be forced to lie close to the data set at prescribed points, provided that there is an nth degree polynomial that can reasonably approximate the data. Linear programming is used to do the fitting. The approach can incorporate a variety of distance measures and global geometric constraints.
['Zhibin Lei', 'David B. Cooper']
New, faster, more controlled fitting of implicit polynomial 2D curves and 3D surfaces to data
347,602
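Evaluating f(z) = Σ a_ij x^i y^j at a point is the basic operation behind both the fitting residuals and the zero-set test Z(f). A small sketch with coefficients stored as a hypothetical {(i, j): a_ij} map (the linear-programming fit itself is omitted):

```python
def eval_poly(coeffs, x, y):
    """Evaluate f(x, y) = sum of a_ij * x**i * y**j over coeffs {(i, j): a_ij}."""
    return sum(a * x**i * y**j for (i, j), a in coeffs.items())

def on_curve(coeffs, x, y, tol=1e-9):
    """Check whether (x, y) lies (numerically) on the zero set Z(f)."""
    return abs(eval_poly(coeffs, x, y)) < tol

# The unit circle x^2 + y^2 - 1 = 0 as a degree-2 implicit polynomial:
circle = {(0, 0): -1.0, (2, 0): 1.0, (0, 2): 1.0}
```

Since f is linear in the coefficients a_ij, constraints of the form |f(z_k)| ≤ ε at data points z_k are linear in the unknowns, which is why linear programming applies to the fitting problem.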
Applicative functors define an interface to computation that is more general, and correspondingly weaker, than that of monads. First used in parser libraries, they are now seeing a wide range of applications. This paper sets out to explore the space of non-monadic applicative functors useful in programming. We work with a generalization, lax monoidal functors, and consider several methods of constructing useful functors of this type, just as transformers are used to construct computational monads. For example, coends, familiar to functional programmers as existential types, yield a range of useful applicative functors, including left Kan extensions. Other constructions are final fixed points, a limited sum construction, and a generalization of the semi-direct product of monoids. Implementations in Haskell are included where possible.
['Ross Paterson']
Constructing applicative functors
72,187
In the era of big data, companies are collecting and analyzing massive amounts of data to help make business decisions. The focus of business intelligence has moved from reporting and performance monitoring to ad-hoc analysis, data exploration and knowledge self-discovery, where the user's train of thought is important. A business intelligence system must provide real-time multidimensional analytic ability over big data volumes in order to meet these demands. Relational OLAP technologies provide better support for user-driven analysis over big volumes of dynamic data. In-memory OLAP technologies enable a real-time analytics experience. Their combination is the new trend for providing real-time multidimensional analytics over big data volumes. However, relational OLAP and in-memory OLAP have their own shortcomings and challenges. Relational OLAP can cause non-optimal relational database access and often has intensive I/O and CPU demands. The biggest challenge of in-memory OLAP is combinatorial explosion: transferring huge amounts of data into a multidimensional cache (cube) is not only very time-consuming but also takes a considerable amount of resources. Increasing hardware resources, employing a distributed in-memory data store, or re-designing the MDX engine used by ROLAP to adopt hard-to-implement algorithms, e.g. parallel computation, are the typical ways to overcome the above challenges. Instead of these costly approaches, this paper discusses several techniques that improve performance and scalability without piling up hardware resources or going through a major re-architecture. These techniques have been implemented in the IBM Cognos Business Analytics (BA) solution and have been bringing success to customers since then.
['Larry Luo', 'Martin Petitclerc']
Relational OLAP query optimization
642,475
In this paper, we study new aspects of the integral constraint regularization of state-constrained elliptic control problems (Jadamba et al. in Syst Control Lett 61(6):707–713, 2012). Besides giving new results on the regularity and the convergence of the regularized controls and associated Lagrange multipliers, the main objective of this paper is to give abstract error estimates for the regularization error. We also consider a discretization of the regularized problems and derive numerical estimates which are uniform with respect to the regularization parameter and the discretization parameter. As an application of these results, we prove that this discretization is indeed a full discretization of the original problem defined in terms of a problem with finitely many integral constraints. Detailed numerical results justifying the theoretical findings as well as a comparison of our work with the existing literature is also given.
['Baasansuren Jadamba', 'Akhtar A. Khan', 'Miguel Sama']
Error estimates for integral constraint regularization of state-constrained elliptic control problems
936,539
Many smartphone operating systems implement strong sandboxing for 3rd party application software. As part of this sandboxing, they feature a permission system, which conveys to users what sensitive resources an application will access and allows users to grant or deny permission to access those resources. In this paper we survey the permission systems of several popular smartphone operating systems and taxonomize them by the amount of control they give users, the amount of information they convey to users and the level of interactivity they require from users. We discuss the problem of permission overdeclaration and devise a set of goals that security researchers should aim for, as well as propose directions through which we hope the research community can attain those goals.
['Kathy Wain Yee Au', 'Yi Fan Zhou', 'Zhen Huang', 'Phillipa Gill', 'David Lie']
Short paper: a look at smartphone permission models
525,765
Performance Improvements and Congestion Reduction for Routing-based Synthesis for Digital Microfluidic Biochips
['Skyler Windh', 'Calvin Phung', 'Daniel T. Grissom', 'Paul Pop', 'Philip Brisk']
Performance Improvements and Congestion Reduction for Routing-based Synthesis for Digital Microfluidic Biochips
709,631
Mediator-learner Dyad - Cooperative Relationships
['Gilda Helena Bernardino de Campos', 'Gianna Oliveira Bogossian Roque']
Mediator-learner Dyad - Cooperative Relationships
644,099
This paper addresses the problem of optimum allocation of distributed real-time workflows with probabilistic service guarantees over a Grid of physical resources made available by a provider. The discussion focuses on how such a problem may be mathematically formalised, both in terms of constraints and objective function to be optimized, which also accounts for possible business rules for regulating the deployment of the workflows. The presented formal problem constitutes a probabilistic admission control test that may be run by a provider in order to decide whether or not it is worth to admit new workflows into the system, and to decide what the optimum allocation of the workflow to the available resources is. Various options are presented which may be plugged into the formal problem description, depending on the specific needs of individual workflows.
['Tommaso Cucinotta', 'Kleopatra Konstanteli', 'Theodora A. Varvarigou']
Advance reservations for distributed real-time workflows with probabilistic service guarantees
30,029
Drivers and Inhibitors for the Adoption of Public Cloud Services in Germany
['Patrick Lübbecke', 'Markus Siepermann', 'Richard Lackes']
Drivers and Inhibitors for the Adoption of Public Cloud Services in Germany
855,887
This paper presents different methods to solve the short-term load forecasting problem in smart grids. A smart grid is an electrical network that can be monitored and managed. Effective and efficient use of energy and low-cost, planning-oriented management are required in smart grids. Accurate load forecasting is one of the most important aids for power management. Load forecasting, demand response and energy prices affect each other. At the same time, load forecasting plays an important role in safe operating conditions and power system control. In this study, the effects of load forecasting under real network operating conditions are examined. A time-series mathematical model and heuristic models have also been developed for load forecasting. The forecasting performance of the developed models was compared with the actual values.
['Nurettin Cetinkaya']
A new mathematical approach and heuristic methods for load forecasting in smart grid
916,348
Many applications, whether manipulation or just visualization of discrete volumes, are time-consuming problems. The general idea for overcoming these difficulties is to transform, in a reversible way, those volumes into Euclidean polyhedra. A first step of this process consists of a digital plane segmentation of the discrete object's surface. In this paper, we present an algorithm to construct a polyhedral representation of a discrete volume (i.e., vertex and face adjacencies) that is optimal in the number of vertices.
['Isabelle Sivignon', 'David Coeurjolly']
From digital plane segmentation to polyhedral representation
852,646
We first describe how several existing software reliability growth models based on nonhomogeneous Poisson processes (NHPPs) can be derived based on a unified theory for NHPP models. Under this general framework, we can verify existing NHPP models and derive new NHPP models. The approach covers a number of known models under different conditions. Based on these approaches, we show a method of estimating and computing software reliability growth during the operational phase. We can use this method to describe the transitions from the testing phase to operational phase. That is, we propose a method of predicting the fault detection rate to reflect changes in the user's operational environments. The proposed method offers a quantitative analysis on software failure behavior in field operation and provides useful feedback information to the development process.
['Chin-Yu Huang', 'Sy-Yen Kuo', 'Michael R. Lyu', 'Jung-Hua Lo']
Quantitative software reliability modeling from testing to operation
190,452
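The unified NHPP framework discussed above can be made concrete with the classic Goel-Okumoto mean value function, one of the best-known NHPP software reliability growth models. The sketch below is a generic illustration and not code from this paper; the parameter names `a` (expected total number of faults) and `b` (per-fault detection rate) follow standard usage.

```python
import math

def nhpp_mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a(1 - e^{-bt}):
    expected cumulative number of faults detected by time t."""
    return a * (1.0 - math.exp(-b * t))

def nhpp_intensity(t, a, b):
    """Failure intensity lambda(t) = m'(t) = a*b*e^{-bt}: the fault
    detection rate, which decreases as testing (or operation) proceeds."""
    return a * b * math.exp(-b * t)
```

Changing `b` between the testing and operational phases is one simple way to model the kind of phase transition the paper analyzes: the same mean value function with a different detection rate reflects a changed usage environment.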
This paper presents graphical tools that facilitate manual building of formal semantic annotations for digital objects. These tools are intended to be integrated in digital object (DO) management systems requiring semantic metadata (e.g. precise indexation, comment, categorization, certification). We define a multidimensional graphical model of semantic annotations that specifically allows contextualization of annotations, and we propose a methodology for building such annotations. The graph-based formalism used (derived from Conceptual Graphs), provides graphical representations that users can easily understand, and are furthermore logically founded. A graphical generic API implementing elements of the SG family has been developed. CoGUI consists of three specialized tools: a tool for defining a CG ontology, another for creating annotations based on a CG ontology, and finally a tool that uses the CoGITaNT platform, for querying an annotated DO base.
['Nicolas Moreau', 'Michel Leclère', 'Michel Chein', 'Alain Gutierrez']
Formal and graphical annotations for digital objects
165,231
In 1985 Aumann axiomatized the Shapley NTU value by non-emptiness, efficiency, unanimity, scale covariance, conditional additivity, and independence of irrelevant alternatives. We show that, when replacing unanimity by "unanimity for the grand coalition" and translation covariance, these axioms characterize the Nash solution on the class of n-person choice problems with reference points. A classical bargaining problem consists of a convex feasible set that contains the disagreement point here called reference point. The feasible set of a choice problem does not necessarily contain the reference point and may not be convex. However, we assume that it satisfies some standard properties. Our result is robust so that the characterization is still valid for many subclasses of choice problems, among those is the class of classical bargaining problems. Moreover, we show that each of the employed axioms – including independence of irrelevant alternatives – may be logically independent of the remaining axioms.
['Peter Sudhölter', 'José Manuel Zarzuelo']
Extending the Nash solution to choice problems with reference points
633,189
Identity-Based Hybrid Signcryption.
['Fagen Li', 'Masaaki Shirase', 'Tsuyoshi Takagi']
Identity-Based Hybrid Signcryption.
778,568
In this work, we investigate a general performance evaluation framework for network selection strategies (NSSs) that are used in 3G-WLAN interworking networks. Instead of simulation, this framework is based on models of NSSs and is constructed using a stochastic process algebra, named Performance Evaluation Process Algebra (PEPA). It captures the traffic and mobility characteristics of mobile nodes in 3G-WLAN interworking networks and has a good expression of the behavior of the mobile nodes using different NSSs. Commonly used NSSs are evaluated from the perspectives of average throughput, handover rate, and network blocking probability. Results of the evaluation explore the effect of these NSSs on both mobile nodes and networks, as well as their characteristics in different mobility and traffic scenarios.
['Hao Wang', 'David Laurenson', 'Jane Hillston']
A General Performance Evaluation Framework for Network Selection Strategies in 3G-WLAN Interworking Networks
89,483
We propose to use multiscale entropy analysis in the characterisation of network traffic and spectrum usage. We show that with such analysis one can quantify the complexity and predictability of measured traces in widely varying timescales. We also explicitly compare the results from entropy analysis to classical characterisations of scaling and self-similarity in time series by means of the fractal dimension and the Hurst parameter. Our results show that the used entropy analysis indeed complements these measures, being able to uncover new information from traffic traces and time series models. We illustrate the application of these techniques both on time series models and on measured traffic traces of different types. As potential applications of entropy analysis in the networking area, we highlight and discuss anomaly detection and validation of traffic models. In particular, we show that anomalous network traffic can have significantly lower complexity than ordinary traffic, and that commonly used traffic and time series models have different entropy structures compared to the studied traffic traces. We also show that the entropy metrics can be applied to the analysis of wireless communication and networks. We point out that entropy metrics can improve the understanding of how spectrum usage changes over time and can be used to enhance the efficiency of dynamic spectrum access networks. Classical scaling analysis, however, necessarily ignores a large amount of information possibly present in the empirical time series (for a related discussion in an information-theoretic context, see (8)). We show through numerous examples that the introduced metrics can indeed uncover several types of structure from observed traffic, and offer complementary information to the classical metrics of time series analysis.
In order to demonstrate that these complexity metrics provide new information about time series, we explicitly compare the behaviour of multiscale entropies to the usual metrics quantifying scaling and self-similarity. We carry out this comparison using both synthetic data traces generated from a number of well-known discrete time series models and measured traffic traces. The latter part is an extension of our earlier work (9). We also discuss the applications of these complexity metrics, focusing in particular on the validation of synthetic traffic models and anomaly detection. Based on experimental data, we show that there is a significant difference in the complexity of anomalous network traffic compared to regular aggregate traffic observed at a router of an Internet service provider. In addition to the classical applications in traffic analysis, we also study measured traces of wireless spectrum usage for self-similarity and compare our findings to results from multiscale entropy analysis. We specifically show that the methods applied here provide insight into the difference between noise and interference and are capable of discovering recurring structures in the data.
['Janne Riihijarvi', 'Matthias Wellens', 'Petri Mahonen']
Measuring Complexity and Predictability in Networks with Multiscale Entropy Analysis
134,554
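The multiscale entropy analysis described above follows the usual recipe of coarse-graining a trace at increasing scales and computing the sample entropy of each coarse-grained series. The minimal sketch below is a generic implementation, not the authors' code; `m` (template length) and `r` (matching tolerance) are the standard sample-entropy parameters.

```python
import math

def coarse_grain(series, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy: negative log of the conditional probability that
    sequences matching for m points (within tolerance r) also match for m+1."""
    def matches(mm):
        templates = [series[i:i + mm] for i in range(len(series) - mm + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(series, max_scale=3, m=2, r=0.2):
    """Sample entropy of the coarse-grained series at scales 1..max_scale."""
    return [sample_entropy(coarse_grain(series, s), m, r)
            for s in range(1, max_scale + 1)]
```

A highly predictable trace (e.g. a periodic one) yields a much lower sample entropy than a random trace of the same length, which is exactly the property the paper exploits to separate anomalous from ordinary traffic.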
The relationship between the family of DCT's and DST's of original sequence and shifted sequences is developed. While these properties are not as simple as the case for the DFT, they are still useful for processing long streams of data sequences where time-varying filtering is required.
['P. Yip', 'K. R. Rao']
On the shift property of DCT's and DST's
500,101
Let P^2 denote the projective plane over a finite field F_q. A pair of nonsingular conics (A, B) in the plane is said to satisfy the Poncelet triangle condition if, considered as conics in P^2 over the algebraic closure of F_q, they intersect transversally and there exists a triangle inscribed in A and circumscribed around B. It is shown in this article that a randomly chosen pair of conics satisfies the triangle condition with asymptotic probability 1/q. We also make a conjecture based upon computer experimentation which predicts this probability for tetragons, pentagons and so on up to enneagons.
['Jaydeep Chipalkatti']
On the Poncelet triangle condition over finite fields
708,231
Genetic and Evolutionary Feature Extraction (GEFE), introduced by Shelton et al. [1], [2], [3], use genetic and evolutionary computation to evolve Local Binary Pattern (LBP) based feature extractors for facial recognition. In this paper, we use GEFE in an effort to classify male and female Drosophila melanogaster by the texture of their wings. To our knowledge, gender classification of the drosophila melanogaster via its wing has not been performed. This research has the potential to simplify the work of geneticists who work with the drosophila melanogaster. Our results show that GEFE outperforms both LBP and Eigenwing methods in terms of accuracy as well as computational complexity.
['Michael E. Payne', 'Jonathan Turner', 'Joseph Shelton', 'Joshua Adams', 'Joi Carter', 'Henry Williams', 'Caresse Hansen', 'Ian Dworkin', 'Gerry V. Dozier']
Fly wing biometrics
482,214
In this paper, we seek control strategies for legged robots that produce resulting kinetics and kinematics that are both stable and biologically realistic. Recent biomechanical investigations have found that spin angular momentum is highly regulated in human standing, walking and running. Motivated by these biomechanical findings, we argue that biomimetic control schemes should explicitly control spin angular momentum, minimizing spin and CM torque contributions not only local in time but throughout movement tasks. Assuming a constant and zero spin angular momentum, we define the zero spin center of pressure (ZSCP) point. For human standing control, we show experimentally and by way of numerical simulation that as the ZSCP point moves across the edge of the foot support polygon, spin angular momentum control changes from regulation to non-regulation. However, even when the ZSCP moves beyond the foot support polygon, stability can be achieved through the generation of restoring CM forces that reestablish the CM position over the foot support polygon. These results are interesting because they suggest that different control strategies are utilized depending on the location of the ZSCP point relative to the foot support polygon.
['Marko B. Popovic', 'Andreas Hofmann', 'Hugh M. Herr']
Zero spin angular momentum control: definition and applicability
924,320
Detecting New Evidence for Evidence-Based Guidelines Using a Semantic Distance Method
['Qing Hu', 'Zhisheng Huang', 'Annette ten Teije', 'Frank van Harmelen']
Detecting New Evidence for Evidence-Based Guidelines Using a Semantic Distance Method
672,033
This paper considers the problem of n-player conflict modeling, arising due to competition over resources. Each player represents a distinct group of people and has some resources and power. A player may either attack other players (i.e., groups) to obtain their resources or do nothing. We present a game-theoretical model for the interaction between the players and show that key questions of interest to policy makers can be answered efficiently, i.e., in polynomial time in the number of players. They are: (1) Given the resources and the power of each group, is no-war a stable situation? and (2) Assuming there are some conflicts already in the society, is there a danger of other groups not involved in the conflict joining it and further degrading the current situation? We show that the pure strategy Nash equilibrium is not an appropriate solution concept for our problem and introduce a refinement of the Nash equilibrium called the asymmetric equilibrium. We also provide an algorithm (that is exponential in the number of players) to compute all the asymmetric equilibria and propose heuristics to improve the performance of the algorithm.
['Noam Hazon', 'Nilanjan Chakraborty', 'Katia P. Sycara']
Game Theoretic Modeling and Computational Analysis of N-Player Conflicts over Resources
925,425
The learning technology standardization process is one of the key research activities in computer-based education. Institutions like the IEEE, the US Department of Defense and the European Commission have set up committees to deliver recommendations and proposals in this area. The objective is to allow the reuse of learning resources and to offer interoperability among heterogeneous e-learning systems. The first part of this paper is devoted to the presentation of an up-to-date survey on one of the most prolific fields of the learning technology standardization: educational metadata. The second part shows how these data models are applied by actual software systems to facilitate the location of learning resources. Educational brokerage is a promising field that lets learners find those computer-based training resources that best fit their needs. We identify the main actors involved, their roles, and open issues and trends.
['Luis Anido', 'Manuel J. Fernández', 'Manuel Caeiro', 'Juan M. Santos', 'Judith S. Rodríguez', 'Martin Llamas']
Educational metadata and brokerage for learning resources
2,952
Water vapor plays the key role in the global hydrologic cycle and climate change. However, the distribution and variability of water vapor in the troposphere are not well understood on a global scale, particularly high-resolution variations. In this paper, 13 years of 2-h precipitable water vapor (PWV) estimates are derived from observations at 155 globally distributed Global Positioning System sites together with global three-hourly surface weather data and six-hourly National Centers for Environmental Prediction/National Center for Atmospheric Research reanalysis products, which are used for the first time to investigate multiscale water-vapor variability on a global scale. It has been found that the distinct seasonal cycles have a maximum of water vapor in summer and a minimum in winter. The higher amplitudes of annual PWV variations are located in midlatitudes, at about 10-20 ± 0.5 mm, and the lower amplitudes are found in high latitudes and equatorial areas, at about 5 ± 0.5 mm. The larger differences of mean PWV between summer and winter are located in midlatitudes, at about 10-30 mm, particularly in the Northern Hemisphere. The semiannual variation amplitudes are relatively weaker, at about 0.5 ± 0.2 mm. In addition, significant diurnal variations of PWV are found over most International Global Navigation Satellite Systems Service stations. The diurnal (24-h) cycle has an amplitude of 0.2-1.2 ± 0.1 mm, and the peak time is from noon to midnight. The semidiurnal (12-h) cycle is weaker, with an amplitude of less than 0.3 mm.
['Shuanggen Jin', 'Oskar Luo']
Variability and Climatology of PWV From Global 13-Year GPS Observations
408,447
Traditional dictionary learning algorithms are used for finding a sparse representation of high-dimensional data by transforming samples into a one-dimensional (1D) vector. This 1D model loses the inherent spatial structure of the data. An alternative solution is to employ tensor decomposition for dictionary learning on the original structural form of the data (a tensor) by learning multiple dictionaries along each mode and the corresponding sparse representation with respect to the Kronecker product of these dictionaries. To learn tensor dictionaries along each mode, all existing methods update each dictionary iteratively in an alternating manner. Because atoms from each mode dictionary jointly contribute to the sparsity of the tensor, existing works ignore atom correlations between different mode dictionaries by treating each mode dictionary independently. In this paper, we propose a joint multiple dictionary learning method for tensor sparse coding, which explores atom correlations for sparse representation and updates multiple atoms from each mode dictionary simultaneously. In this algorithm, the Frequent-Pattern Tree (FP-tree) mining algorithm is employed to exploit frequent atom patterns in the sparse representation. Inspired by the idea of K-SVD, we develop a new dictionary update method that jointly updates elements in each pattern. Experimental results demonstrate that our method outperforms other tensor-based dictionary learning algorithms.
['Yifan Fu', 'Junbin Gao', 'Yanfeng Sun', 'Xia Hong']
Joint multiple dictionary learning for Tensor sparse coding
525,660
Cognitive radio has the ability to sense the environment and adapt its behavior to optimize communication features, such as quality of service in the presence of interference and noise. To achieve this goal in the physical layer, distinct phases of sensing, channel estimation, and configuration selection are necessary. The sensing part measures the interference level, recognizes the spectrum holes, and sends this information to the channel estimator. In the next step, channel state information (CSI) is used for data detection and is also sent to the transmitter through a limited feedback link. CSI feedback consists of the achievable rate, the SNR value, and modulation or coding schemes (MCS). The feedback link improves system performance at the cost of complexity and delay. In this paper, we present and compare different feedback schemes for cognitive radio and study the channel capacity when an imperfect feedback link is corrupted by noise and delay.
['Haleh Hosseini', 'Anis Izzati Ahmad Zamani', 'Sharifah Kamilah Syed Yusof', 'Norsheila bt. Fisal']
CSI Feedback Model in the Context of Adaptive Cognitive Radio Systems
404,847
Preface to SCP special issue with extended selected papers from SBMF 2014
['Christiano Braga', 'Narciso Martí-Oliet']
Preface to SCP special issue with extended selected papers from SBMF 2014
894,961
The reuse of existing infrastructure while leaving the legacy equipment unaffected is a crucial feature for the evolution of gigabit passive optical networks (GPONs) towards higher capacity wavelength division multiplexed passive optical networks (WDM-PONs). In this paper a testbed for the evaluation of simple and cost-effective GPON evolutions will be presented. A comprehensive set of WDM-PON architectures based on remote wavelength distribution and wavelength reuse for upstream (US) transmission will be proposed. Both 2.5 Gb/s and 10 Gb/s (symmetric traffic) have been taken into account and experimentally evaluated in terms of bit error rate (BER) on the same testing bench. System perspectives and limitations are also discussed.
['Gianluca Berrettini', 'Luca Giorgi', 'Filippo Ponzini', 'Fabio Cavaliere', 'Pierpaolo Ghiggino', 'Luca Poti', 'Antonella Bogoni']
Testbed for experimental analysis on seamless evolution architectures from GPON to high capacity WDM-PON
71,679
Investigation of Combining Various Major Language Model Technologies including Data Expansion and Adaptation
['Ryo Masumura', 'Taichi Asami', 'Takanobu Oba', 'Hirokazu Masataki', 'Sumitaka Sakauchi', 'Akinori Ito']
Investigation of Combining Various Major Language Model Technologies including Data Expansion and Adaptation
897,369
Spectral efficiency is a key design issue for all wireless communication systems. Orthogonal frequency division multiplexing (OFDM) is a very well-known technique for efficient data transmission over many carriers overlapped in frequency. Recently, several studies have appeared that describe spectrally efficient variations of multi-carrier systems where the condition of orthogonality is dropped. Proposed techniques suffer from two weaknesses: first, the complexity of generating the signal is increased; second, signal detection is computationally demanding. Known methods suffer from either unusably high complexity or high error rates caused by inter-carrier interference. This study addresses both problems by proposing new transmitter and receiver architectures whose design is based on the simplification that a rational spectrally efficient frequency division multiplexing (SEFDM) system can be treated as a set of overlapped and interleaved OFDM systems. The efficacy of the proposed designs is shown through detailed simulation of systems with different signal types and carrier dimensions. The decoder is heuristic but in practice produces very good results that are close to the theoretical best performance in a variety of settings. The system is able to produce efficiency gains of up to 20% with negligible impact on the required signal-to-noise ratio.
['Richard G. Clegg', 'Safa Isam', 'Ioannis Kanaras', 'Izzat Darwazeh']
A practical system for improved efficiency in frequency division multiplexed wireless networks
301,067
Intrusion Detection Systems (IDSs): Implementation.
['E Eugene Schultz', 'Eugene H. Spafford']
Intrusion Detection Systems (IDSs): Implementation.
805,715
This paper introduces a statistical methodology for the identification of differentially expressed genes in DNA microarray experiments based on multiple criteria. These criteria are false discovery rate (FDR), variance-normalized differential expression levels (paired t statistics), and minimum acceptable difference (MAD). The methodology also provides a set of simultaneous FDR confidence intervals on the true expression differences. The analysis can be implemented as a two-stage algorithm in which there is an initial screen that controls only FDR, which is then followed by a second screen which controls both FDR and MAD. It can also be implemented by computing and thresholding the set of FDR P values for each gene that satisfies the MAD criterion. We illustrate the procedure to identify differentially expressed genes from a wild type versus knockout comparison of microarray data.
['Alfred O. Hero', 'Gilles Fleury', 'Alan J. Mears', 'Anand Swaroop']
Multicriteria gene screening for analysis of differential expression with DNA microarrays
126,056
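The two-stage screen described above can be sketched generically. In this illustration, the Benjamini-Hochberg step-up procedure stands in for the paper's FDR control, and a simple mean-difference cutoff stands in for the MAD criterion; the function names and thresholds are illustrative, not taken from the paper.

```python
import math
from statistics import mean, stdev

def paired_t(diffs):
    """Paired t statistic for one gene's replicate expression differences."""
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

def bh_fdr(pvalues, alpha):
    """Benjamini-Hochberg step-up procedure: indices of the hypotheses
    rejected while controlling the false discovery rate at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            k = rank
    return set(order[:k])

def two_stage_screen(diffs_per_gene, pvalues, alpha, mad):
    """Stage 1: FDR screen on the p-values; stage 2: keep only genes whose
    mean expression difference also exceeds the MAD threshold."""
    passed = bh_fdr(pvalues, alpha)
    return sorted(i for i in passed if abs(mean(diffs_per_gene[i])) >= mad)
```

Running the FDR screen first and then the MAD filter mirrors the two-stage algorithm in the text: the first stage controls the error rate, and the second discards statistically significant but biologically negligible differences.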
We first show how reproducing kernel Hilbert space (RKHS) norms can be determined for a large class of covariance functions by methods based on the solution of a Riccati differential equation or a Wiener-Hopf integral equation. Efficient numerical algorithms for such equations have been extensively studied, especially in the control literature. The innovations representations enter in that it is they that suggest the form of the RKHS norms. From the RKHS norms, we show how recursive solutions can be obtained for certain Fredholm equations of the first kind that are widely used in certain approaches to detection theory. Our approach specifies a unique solution: moreover, the algorithms used are well suited to the treatment of increasing observation intervals.
['Thomas Kailath', 'Roger T. Geesey', 'Howard L. Weinert']
Some relations among RKHS norms, Fredholm equations, and innovations representations
155,934
Much is known about the differences in expressiveness and succinctness between nondeterministic and deterministic automata on infinite words. Much less is known about the relative succinctness of the different classes of nondeterministic automata. For example, while the best translation from a nondeterministic Büchi automaton to a nondeterministic co-Büchi automaton is exponential, and involves determinization, no super-linear lower bound is known. This annoying situation, of not being able to use the power of nondeterminism, nor to show that it is powerless, is shared by more problems, with direct applications in formal verification. In this paper we study a family of problems of this class. The problems originate from the study of the expressive power of deterministic Büchi automata: Landweber characterizes languages L ⊆ Σ^ω that are recognizable by deterministic Büchi automata as those for which there is a regular language R ⊆ Σ* such that L is the limit of R; that is, w ∈ L iff w has infinitely many prefixes in R. Two other operators that induce a language of infinite words from a language of finite words are co-limit, where w ∈ L iff w has only finitely many prefixes in R, and persistent-limit, where w ∈ L iff almost all the prefixes of w are in R. Both co-limit and persistent-limit define languages that are recognizable by deterministic co-Büchi automata. They define them, however, by means of nondeterministic automata. While co-limit is associated with complementation, persistent-limit is associated with universality. For the three limit operators, the deterministic automata for R and L share the same structure. It is not clear, however, whether and how it is possible to relate nondeterministic automata for R and L, or to relate nondeterministic automata to which different limit operators are applied.
In the paper, we show that the situation is involved: in some cases we are able to describe a polynomial translation, whereas in others we present an exponential lower bound. For example, going from a nondeterministic automaton for R to a nondeterministic automaton for its limit is polynomial, whereas going to a nondeterministic automaton for its persistent limit is exponential. Our results show that the contribution of nondeterminism to the succinctness of an automaton does depend upon its semantics.
['Benjamin Aminof', 'Orna Kupferman']
On the Succinctness of Nondeterminism
892,522
The increasing prevalence of psychological distress disorders, such as depression and post-traumatic stress, necessitates a serious effort to create new tools and technologies to help with their diagnosis and treatment. In recent years, new computational approaches were proposed to objectively analyze patient non-verbal behaviors over the duration of the entire interaction between the patient and the clinician. In this paper, we go beyond non-verbal behaviors and propose a tri-modal approach which integrates verbal behaviors with acoustic and visual behaviors to analyze psychological distress during the course of the dyadic semi-structured interviews. Our approach exploits the advantages of the dyadic nature of these interactions to contextualize the participant responses based on the affective components (intimacy and polarity levels) of the questions. We validate our approach using one of the largest corpus of semi-structured interviews for distress assessment which consists of 154 multimodal dyadic interactions. Our results show significant improvement on distress prediction performance when integrating verbal behaviors with acoustic and visual behaviors. In addition, our analysis shows that contextualizing the responses improves the prediction performance, most significantly with positive and intimate questions.
['Sayan Ghosh', 'Moitreya Chatterjee', 'Louis-Philippe Morency']
A Multimodal Context-based Approach for Distress Assessment
437,954
In challenging environments, sensory data must be stored inside the network in case of sink failures. Since all sensor nodes have limited storage capacity and energy, not all data can be stored. In order to preserve more data, we need to redistribute some data items from storage-depleted source nodes to sensor nodes with available storage space and residual energy. In this paper, we study the data redistribution and retrieval with priority problem (DRRP) and try to preserve the data that maximize the total preserved data priorities, while minimizing data redistribution and retrieval costs. By utilizing graph transformation, we convert this problem into a minimum-cost maximum-weighted-flow problem. In order to find the optimal solution, we propose an improved linear programming algorithm named E²DP² (energy-efficient data preservation with priority). Through extensive simulations, we show that the proposed algorithm outperforms two traditional algorithms in terms of energy consumption.
['Qiong Yi', 'Jun Wang', 'Chuang Liu']
Energy-efficient data storage solutions under sink failures
821,622
We describe a source and channel coding algorithm suitable for video messaging over wireless networks. The research platform is based on ITU-T H.263 video coding, Reed Solomon channel coding, data interleaving and a wireless network provided by Lucent's WaveLAN. We consider the special characteristics of one-way video messaging, and propose rate control, unequal error protection and error concealment algorithms with features for object-oriented quality enhancement. These features consist of: (1) object-oriented bit allocation based on the segmentation of video into objects of interest, (2) unequal channel error protection by object-oriented data partitioning, and (3) object-oriented error concealment using the locations and motion vectors of object. Based on experiments with head and shoulders scenes, the subjective quality and error resilience of decoded video are seen to increase significantly. Simulation results show that the proposed algorithm is effective at high packet loss rates and with clustered packet losses.
['Seong Hwan Jang', 'Nikil Jayant']
Object-oriented source and channel coding for video messaging applications over wireless networks
366,130
Continuous Gaze Cursor Feedback in Various Tasks: Influence on Eye Movement Behavior, Task Performance and Subjective Distraction
['Sven-Thomas Graupner', 'Sebastian Pannasch']
Continuous Gaze Cursor Feedback in Various Tasks: Influence on Eye Movement Behavior, Task Performance and Subjective Distraction
120,682
In this paper, we propose a novel hybrid intelligent system (HIS) which provides a unified integration of numerical and linguistic knowledge representations. The proposed HIS is a hierarchical integration of an incremental learning fuzzy neural network (ILFN) and a linguistic model, i.e., fuzzy expert system (FES), optimized via the genetic algorithm (GA). The ILFN is a self-organizing network. The linguistic model is constructed based on knowledge embedded in the trained ILFN or provided by the domain expert. The knowledge captured from the low-level ILFN can be mapped to the higher level linguistic model and vice versa. The GA is applied to optimize the linguistic model to maintain high accuracy, comprehensibility, completeness, compactness, and consistency. The resulted HIS is capable of dealing with low-level numerical computation and higher level linguistic computation. After the system is completely constructed, it can incrementally learn new information in both numerical and linguistic forms. To evaluate the system's performance, the well-known benchmark Wisconsin breast cancer data set was studied for an application to medical diagnosis. The simulation results have shown that the proposed HIS performs better than the individual standalone systems. The comparison results show that the linguistic rules extracted are competitive with or even superior to some well-known methods. Our interest is not only on improving the accuracy of the system, but also enhancing the comprehensibility of the resulted knowledge representation.
['Phayung Meesad', 'Gary G. Yen']
Combined numerical and linguistic knowledge representation and its application to medical diagnosis
422,407
Minimum Weight Polygon Triangulation Problem in Sub-Cubic Time Bound
['Sung Eun Bae', 'Tong-Wook Shinn', 'Tadao Takaoka']
Minimum Weight Polygon Triangulation Problem in Sub-Cubic Time Bound
925,010
The Fundamental Solution to the Wright-fisher Equation
['Daniel W. Stroock', 'Linan Chen']
The Fundamental Solution to the Wright-fisher Equation
592,322
This paper presents a new model of an elevator group intelligent scheduling system with destination floor guidance. The traditional input mode of separate hall call registration and destination selection is improved to a single-input mode. On this basis, a dynamic partition method for up-peak traffic is studied, which dynamically adjusts the division of floor regions based on the flow rate and distribution of passengers. A dynamic programming algorithm is used to solve this problem. Through regrouping and classifying the ensemble of hall call communication, prediction of multi-objective evaluation items is proposed. A fuzzy neural network is further constructed and applied to realize the optimal scheduling policy. Simulation results show that the presented elevator model is effective and that the optimized scheduling algorithm markedly improves the overall performance of the elevator system.
['Suying Yang', 'Jianzhe Tai', 'Cheng Shao']
Dynamic Partition of Elevator Group Control System with Destination Floor Guidance in Up-peak Traffic
349,799
A new approach to block-based motion estimation for video compression, called the genetic motion search (GMS) algorithm, is introduced. It makes use of a natural processing concept called the genetic algorithm (GA). In contrast to existing fast algorithms, which rely on the assumption that the matching error decreases monotonically as the search point moves closer to the global optimum, the GMS algorithm is not fundamentally limited by this restriction. Experimental results demonstrate that GMS is more robust than other algorithms in locating the global optimum and is computationally simpler than the full search algorithm. GMS is also suitable for VLSI implementation because of its regularity and high architectural parallelism.
['K. Hung-Kei Chow', 'Ming L. Liou']
Genetic motion search algorithm for video compression
335,222
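The idea of searching the motion-vector space with a genetic algorithm can be sketched as follows. This is a generic GA over candidate displacement vectors scored by the sum of absolute differences (SAD), not the authors' exact GMS design; the population size, mutation rate, and selection scheme are illustrative choices.

```python
import random

def sad(ref, cur, bx, by, dx, dy, bs):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by the candidate vector (dx, dy)."""
    return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(bs) for j in range(bs))

def genetic_motion_search(ref, cur, bx, by, bs=4, sr=3,
                          pop_size=8, gens=10, seed=0):
    """GA over candidate motion vectors within a +/-sr search range.
    Truncation selection keeps the best half each generation, so the best
    vector found so far is never lost; children come from crossover of the
    dx and dy genes, plus occasional +/-1 mutation of one component."""
    rng = random.Random(seed)
    fitness = lambda v: sad(ref, cur, bx, by, v[0], v[1], bs)
    pop = [(rng.randint(-sr, sr), rng.randint(-sr, sr)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [a[0], b[1]]               # crossover: mix dx and dy
            if rng.random() < 0.3:             # mutation: jitter one component
                axis = rng.randint(0, 1)
                child[axis] = max(-sr, min(sr, child[axis] + rng.choice([-1, 1])))
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=fitness)
```

Because the population samples the search window globally rather than descending a single error surface, the search does not depend on the monotonic-error assumption that limits conventional fast block-matching algorithms.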