Columns: abstract (string, lengths 8–10.1k), authors (string, lengths 9–1.96k), title (string, lengths 6–367), __index_level_0__ (int64, 13–1,000k).
A lack of clarity about licensing terms discourages consumers from reusing data, which in turn hinders further publication and consumption of datasets, at the expense of the growth of the Web of Data itself. In this paper, we propose a general framework for attaching licensing terms to data queried on the Web of Data. In particular, our framework addresses the following issues: (i) the various license schemas are collected and aligned, taking the Creative Commons schema as a reference; (ii) the compatibility of the licensing terms of the data affected by a query is verified; and (iii) if compatible, the licenses are combined into a composite license. The framework returns the composite license as the licensing term for the data resulting from the query.
['Serena Villata', 'Fabien Gandon']
Licenses compatibility and composition in the web of data
588,826
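A minimal sketch of the compatibility check and composition step described in this abstract, assuming a CC-style term model; the License class, the term names, and the most-restrictive composition rule are illustrative assumptions, not the authors' actual framework:

```python
# Minimal sketch (illustrative, not the authors' framework): licenses as
# sets of CC-style permissions, requirements, and prohibitions.
from dataclasses import dataclass

@dataclass(frozen=True)
class License:
    name: str
    permissions: frozenset = frozenset()
    requirements: frozenset = frozenset()
    prohibitions: frozenset = frozenset()

def compatible(a: License, b: License) -> bool:
    """Licenses conflict when one prohibits an action the other permits."""
    return not (a.prohibitions & b.permissions or b.prohibitions & a.permissions)

def compose(a: License, b: License) -> License:
    """Composite license for query results drawing on both datasets:
    keep shared permissions, accumulate requirements and prohibitions."""
    return License(f"({a.name}+{b.name})",
                   a.permissions & b.permissions,
                   a.requirements | b.requirements,
                   a.prohibitions | b.prohibitions)

cc_by = License("CC-BY",
                frozenset({"reproduction", "distribution", "derivatives"}),
                frozenset({"attribution"}))
cc_by_sa = License("CC-BY-SA",
                   frozenset({"reproduction", "distribution", "derivatives"}),
                   frozenset({"attribution", "share-alike"}))

if compatible(cc_by, cc_by_sa):
    print(compose(cc_by, cc_by_sa))
```

Composition here takes the conservative reading of a composite license: only shared permissions survive, while obligations and prohibitions accumulate.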
Voting among different agents is a powerful tool in problem solving, and it has been widely applied to improve the performance in finding the correct answer to complex problems. We present a novel benefit of voting, that has not been observed before: we can use the voting patterns to assess the performance of a team and predict their final outcome. This prediction can be executed at any moment during problem-solving and it is completely domain independent. Hence, it can be used to identify when a team is failing, allowing an operator to take remedial procedures (such as changing team members, the voting rule, or increasing the allocation of resources). We present three main theoretical results: (1) we show a theoretical explanation of why our prediction method works; (2) contrary to what would be expected based on a simpler explanation using classical voting models, we show that we can make accurate predictions irrespective of the strength (i.e., performance) of the teams, and that in fact, the prediction can work better for diverse teams composed of different agents than uniform teams made of copies of the best agent; (3) we show that the quality of our prediction increases with the size of the action space. We perform extensive experimentation in two different domains: Computer Go and Ensemble Learning. In Computer Go, we obtain high quality predictions about the final outcome of games. We analyze the prediction accuracy for three different teams with different levels of diversity and strength, and show that the prediction works significantly better for a diverse team. Additionally, we show that our method still works well when trained with games against one adversary, but tested with games against another, showing the generality of the learned functions. Moreover, we evaluate four different board sizes, and experimentally confirm better predictions in larger board sizes. We analyze in detail the learned prediction functions, and how they change according to each team and action space size. In order to show that our method is domain independent, we also present results in Ensemble Learning, where we make online predictions about the performance of a team of classifiers, while they are voting to classify sets of items. We study a set of classical classification algorithms from machine learning, in a data-set of hand-written digits, and we are able to make high-quality predictions about the final performance of two different teams. Since our approach is domain independent, it can be easily applied to a variety of other domains.
['Leandro Soriano Marcolino', 'Aravind S. Lakshminarayanan', 'Vaishnavh Nagarajan', 'Milind Tambe']
Every team deserves a second chance: an extended study on predicting team performance
905,869
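The prediction idea is easy to illustrate end to end: derive features purely from voting patterns and train a classifier to predict the final outcome. Everything below (the agreement features, the synthetic games, the logistic-regression choice) is an assumed stand-in for the paper's setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def vote_features(votes):
    """votes: (n_steps, n_agents) chosen actions. Features summarize how
    strongly the team agreed with the plurality choice over the game."""
    agree = []
    for step in votes:
        _, counts = np.unique(step, return_counts=True)
        agree.append(counts.max() / votes.shape[1])
    agree = np.asarray(agree)
    return np.array([agree.mean(), agree.std(), (agree == 1.0).mean()])

def synth_game(win):
    """Toy generator: winning games show more frequent agreement."""
    votes = rng.integers(0, 10, size=(50, 5))          # 5 agents, 10 actions
    p_agree = 0.75 if win else 0.45                    # assumed agreement levels
    for t in range(50):
        if rng.random() < p_agree:
            votes[t, :3] = votes[t, 0]                 # a 3-agent plurality forms
    return vote_features(votes)

X = np.array([synth_game(w) for w in [True] * 200 + [False] * 200])
y = np.array([1] * 200 + [0] * 200)
clf = LogisticRegression().fit(X, y)
print("P(win | high agreement):", clf.predict_proba([synth_game(True)])[0, 1])
```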
We investigate the dividend optimization problem for a company whose surplus process is modeled by a regime-switching compound Poisson model with credit and debit interest. The surplus process is controlled by subtracting the cumulative dividends. The performance of a dividend distribution strategy which determines the timing and amount of dividend payments, is measured by the expectation of the total discounted dividends until ruin. The objective is to identify an optimal dividend strategy which attains the maximum performance. We show that a regime-switching band strategy is optimal.
['Jinxia Zhu']
Singular optimal dividend control for the regime-switching Cramér-Lundberg model with credit and debit interest
351,078
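In symbols, the performance measure described above is the expected total discounted dividends until ruin; the formulation below is the standard singular-control notation and is assumed rather than quoted from the paper:

```latex
% Value of a dividend strategy D: total discounted dividends until the
% ruin time tau^D, with discount rate delta, initial surplus x, and
% initial regime i (notation assumed, not taken from the paper).
\[
  V(x,i) \;=\; \sup_{D}\; \mathbb{E}\!\left[\int_{0}^{\tau^{D}}
  e^{-\delta t}\,\mathrm{d}D_t \;\middle|\; X_{0}=x,\ J_{0}=i\right],
  \qquad
  \tau^{D} \;=\; \inf\{t \ge 0 : X_{t}^{D} < 0\}.
\]
```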
The relationship between ocean wind vectors and L-band normalized radar cross sections (NRCS) is examined to modify the L-band geophysical model function (GMF) using the Phased-Array L-band Synthetic Aperture Radar 2 (PALSAR-2) and scatterometer wind data. The differences between the present study and the previous one using the PALSAR are the incidence angle range and the polarimetry. Since the PALSAR-2 has a dual-polarized ScanSAR mode, the sensitivity of the L-band HV channel to wind fields is also examined. Though the HV signal shows wind speed and incidence angle dependencies, it is too noisy to be used for wind detection. Based on more than 20,000 match-ups, the L-band HH NRCS dependence on wind speed, incidence angle, and relative wind direction is modeled for the 0–20 m/s wind speed range. Wind speeds are then inversely estimated from the match-ups and compared with the reference winds. The accuracy depends on the incidence angle. Detection at the lowest incidence angles of 25–30 degrees gives the best result, with a 0.23 m/s bias and a 2.33 m/s root-mean-square (rms) error.
['Osamu Isoguchi', 'Masanobu Shimada']
Detection of wind fields from PALSAR-2
933,155
This paper describes the integration of children into the analysis phase of a User-Centered Design approach for game development. We used the probes approach to collect qualitative and quantitative data effectively with the help of children. Our approach aimed at investigating children's gaming behaviors and requirements. Therefore, children not only took part in the probes study, but also assisted in the development and design of the probes material. Additionally, we demonstrate the possibility of using the collected data of the probes material as a basis to create child personas.
['Christiane Moser', 'Verena Fuchsberger', 'Manfred Tscheligi']
Using probes to create child personas for games
359,486
A Comparative Study: Use of a Brain-Computer Interface (BCI) Device by People with Cerebral Palsy in Interaction with Computers
['Regina de Oliveira Heidrich', 'Francisco Rebelo', 'Marsal Avila Alves Branco', 'João Batista Mossmann', 'Anderson Schuh', 'Emely Jensen', 'Tiago Oliveira']
A Comparative Study: Use of a Brain-Computer Interface (BCI) Device by People with Cerebral Palsy in Interaction with Computers
619,728
It is well known that industrial robots are not very accurate. Robot calibration, which is one of the key techniques in robot off-line programming (OLP) as well as in robotics, is very helpful for increasing the accuracy of robot motion. To solve the problem of base frame calibration for coordinated multi-robot systems, this paper proposes a simple and practical method, which improves on three-point calibration. It determines the base frame relationship for multi-robot systems. With laser sensors and a buzzer, the degree of accuracy has been enhanced. In order to integrate the process and the algorithm of the calibration method, software has been developed. Experimental results have verified the validity and effectiveness of the proposed method.
['Huajian Deng', 'Hongmin Wu', 'Cao Yang', 'Yisheng Guan', 'Hong Zhang', 'Jianling Liu']
Base frame calibration for multi-robot coordinated systems
654,190
Investigation of Speed-Accuracy Tradeoffs in Speech Production Using Real-Time Magnetic Resonance Imaging.
['Adam C. Lammert', 'Christine H. Shadle', 'Shrikanth Narayanan', 'Thomas F. Quatieri']
Investigation of Speed-Accuracy Tradeoffs in Speech Production Using Real-Time Magnetic Resonance Imaging.
879,520
Open multi-agent systems (MASs) act as societies in which autonomous and heterogeneous agents can work towards similar or different goals. In order to cope with the heterogeneity, autonomy and diversity of interests among the different agents in the society, open MASs establish a set of behavioral norms that is used as a mechanism to ensure a state of cooperation among agents. Such norms regulate the behavior of the agents by defining obligations, permissions and prohibitions. Fulfillment of a norm may be encouraged through a reward, while violation of a norm may be discouraged through punishment. Although norms are promising mechanisms to regulate an agent's behavior, we should note that each agent is an autonomous entity that is free to fulfill or violate each associated norm. Thus, agents can use different strategies when deciding how to achieve their goals, including whether to comply with their associated norms. Agents might choose to achieve their goals while ignoring their norms, thus overlooking the rewards or punishments they may receive. In contrast, agents may choose to comply with all the norms even though some of their goals may not be achieved. In this context, this paper proposes a framework for the simulation of normative agents, providing a basis for understanding the impact of norms on agents.
['Marx L. Viana', 'Paulo S. C. Alencar', 'Donald D. Cowan', 'Everton T. Guimarães', 'Francisco J. P. Cunha', 'Carlos José Pereira de Lucena']
The Development of Normative Autonomous Agents: An Approach
641,935
Automatic Intervertebral Disc Localization and Segmentation in 3D MR Images Based on Regression Forests and Active Contours
['Martin Urschler', 'Kerstin Hammernik', 'Thomas Ebner', 'Darko Stern']
Automatic Intervertebral Disc Localization and Segmentation in 3D MR Images Based on Regression Forests and Active Contours
860,730
We show how to extract an arbitrary polynomial number of simultaneously hardcore bits from any one-way function. In the case the one-way function is injective or has polynomially-bounded pre-image size, we assume the existence of indistinguishability obfuscation (iO). In the general case, we assume the existence of differing-input obfuscation (diO), but of a form weaker than full auxiliary-input diO. Our construction for injective one-way functions extends to extract hardcore bits on multiple, correlated inputs, yielding new D-PKE schemes. Of independent interest is a definitional framework for differing-inputs obfuscation in which security is parameterized by circuit-sampler classes.
['Mihir Bellare', 'Igors Stepanovs', 'Stefano Tessaro']
Poly-Many Hardcore Bits for Any One-Way Function and a Framework for Differing-Inputs Obfuscation
404,732
The mATHENA Inventory for Free Mobile Assistive Technology Applications
['Georgios Kouroupetroglou', 'Spyridon Kousidis', 'Paraskevi Riga', 'Alexandros Pino']
The mATHENA Inventory for Free Mobile Assistive Technology Applications
785,974
An Ontology Engineering Approach to Gamify Collaborative Learning Scenarios
['Geiser Chalco Challco', 'Dilvan A. Moreira', 'Riichiro Mizoguchi', 'Seiji Isotani']
An Ontology Engineering Approach to Gamify Collaborative Learning Scenarios
573,853
We introduce the concept of test vector chains, which allows us to obtain new test vectors from existing ones through single-bit changes without any test generation effort. We demonstrate that a test set T0 has a significant number of test vector chains that are effective in increasing the numbers of detections of target faults, i.e., faults targeted during the generation of T0, as well as untargeted faults, i.e., faults that were not targeted during the generation of T0.
['Irith Pomeranz', 'Sudhakar M. Reddy']
Test vector chains for increased targeted and untargeted fault coverage
36,463
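A minimal sketch of the chain idea, assuming the fault-simulation step is external; flip_order is a hypothetical parameter standing in for whatever bit order a tool would choose:

```python
def vector_chain(vector: str, flip_order):
    """Yield a chain of test vectors, each differing from its predecessor
    in exactly one bit; no test generation effort is needed."""
    bits = list(vector)
    yield "".join(bits)
    for i in flip_order:
        bits[i] = "0" if bits[i] == "1" else "1"   # single-bit change
        yield "".join(bits)

# Candidates grown from one vector of T0; in the flow described above each
# would be fault-simulated and kept only if it adds detections of targeted
# or untargeted faults. The flip order here is arbitrary.
for v in vector_chain("10110", flip_order=[2, 0, 4, 1]):
    print(v)
```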
Low-rank matrix approximation has been widely adopted in machine learning applications with sparse data, such as recommender systems. However, the sparsity of the data, which is incomplete and noisy, introduces challenges to algorithm stability: small changes in the training data may significantly change the models. As a result, existing low-rank matrix approximation solutions yield low generalization performance, exhibiting high error variance on the training dataset, and minimizing the training error may not guarantee error reduction on the testing dataset. In this paper, we investigate the algorithm stability problem of low-rank matrix approximations. We present a new algorithm design framework, which (1) introduces new optimization objectives to guide stable matrix approximation algorithm design, and (2) solves the optimization problem to obtain stable low-rank approximation solutions with good generalization performance. Experimental results on real-world datasets demonstrate that the proposed work achieves better prediction accuracy compared with both state-of-the-art low-rank matrix approximation methods and ensemble methods in recommendation tasks.
['Dongsheng Li', 'Chao Chen', 'Qin Lv', 'Junchi Yan', 'Li Shang', 'Stephen M. Chu']
Low-Rank Matrix Approximation with Stability
818,594
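For context, a baseline L2-regularized matrix factorization trained by SGD is sketched below; the paper's stability-oriented objective replaces this plain training-error objective, so this illustrates only the setting, not the proposed method:

```python
import numpy as np

def factorize(R, mask, k=8, lr=0.02, reg=0.1, epochs=200, seed=0):
    """SGD on observed entries of R (mask == True); returns U, V with R ~ U V^T."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.1 * rng.standard_normal((m, k))
    V = 0.1 * rng.standard_normal((n, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])   # regularized gradient step
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

rng = np.random.default_rng(1)
truth = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 20))
mask = rng.random((30, 20)) < 0.3                    # sparse, incomplete ratings
U, V = factorize(truth * mask, mask)
rmse = np.sqrt((((U @ V.T - truth) * mask) ** 2).sum() / mask.sum())
print(f"RMSE on observed entries: {rmse:.3f}")
```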
Consistency Thresholds for Binary Symmetric Block Models
['Elchanan Mossel', 'Joe Neeman', 'Allan Sly']
Consistency Thresholds for Binary Symmetric Block Models
758,693
Photo enhancement refers to the process of increasing the aesthetic appeal of a photo, such as changing the photo aspect ratio and spatial recomposition. It is a widely used technique in the printing industry, graphic design, and cinematography. In this paper, we propose a unified and socially aware photo enhancement framework which can leverage the experience of photographers with various aesthetic topics (e.g., portrait and landscape). We focus on photos from the image hosting site Flickr, which has 87 million users and to which more than 3.5 million photos are uploaded daily. First, a tagwise regularized topic model is proposed to describe the aesthetic topic of each Flickr user, and coherent and interpretable topics are discovered by leveraging both the visual features and tags of photos. Next, a graph is constructed to describe the similarities in aesthetic topics between the users. Noticeably, densely connected users have similar aesthetic topics, which are categorized into different communities by a dense subgraph mining algorithm. Finally, a probabilistic model is exploited to enhance the aesthetic attractiveness of a test photo by leveraging the photographic experiences of Flickr users from the corresponding communities of that photo. Paired-comparison-based user studies show that our method performs competitively on photo retargeting and recomposition. Moreover, our approach accurately detects aesthetic communities in a photo set crawled from nearly 100000 Flickr users.
['Richang Hong', 'Luming Zhang', 'Dacheng Tao']
Unified Photo Enhancement by Discovering Aesthetic Communities From Flickr
605,040
Network measurement at 10+ Gbps speeds imposes many restrictions on the resource consumption of the measurement application, making any filtering of input data highly desirable. Symmetric connection detection (SCD) is a method of filtering TCP sessions, passing only those sessions which become fully established. SCD can benefit network monitoring applications that are only interested in fully established TCP connections by reducing processing requirements. Incomplete connection attempts, such as port scanning attempts, simply waste resources in many applications if they are not filtered. SCD filters out unsuccessful connection attempts using a combination of Bloom filters to track the state of connection establishment for every flow passing through a network device. Unsuccessful flows can be filtered out to a very high degree of accuracy, depending on the size of the Bloom filter and the traffic rate; 99.5% is typical. Resource consumption, both memory and CPU, is low. The core SCD algorithm is designed to work in high-speed routers, in real time, and at line speed. Using an upper bound of 32 kbytes of RAM, our experimental results indicate 99+% accuracy with 900,000 active flows.
['Brad Whitehead', 'Chung-Horng Lung', 'Peter Rabinovitch']
A TCP Connection Establishment Filter: Symmetric Connection Detection
139,200
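A much-simplified sketch of the Bloom-filter handshake tracking (the deployed SCD design also ages out state, e.g. with paired filters; the flag handling and sizes here are illustrative):

```python
# Bloom filters track SYNs seen in each direction; a flow is passed on as
# "established" only once the three-way handshake has completed.
import hashlib

class Bloom:
    def __init__(self, m=1 << 15, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)
    def _idx(self, key):
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "big") % self.m
    def add(self, key):
        for p in self._idx(key):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._idx(key))

syn_seen, established = Bloom(), Bloom()

def observe(src, dst, flags):
    """Return True only for flows whose handshake completed."""
    fwd, rev = f"{src}>{dst}", f"{dst}>{src}"
    if flags == "SYN":
        syn_seen.add(fwd)
    elif flags == "SYNACK" and rev in syn_seen:
        syn_seen.add(fwd)
    elif flags == "ACK" and fwd in syn_seen and rev in syn_seen:
        established.add(fwd)
    return fwd in established

observe("10.0.0.1:4242", "10.0.0.2:80", "SYN")
observe("10.0.0.2:80", "10.0.0.1:4242", "SYNACK")
print(observe("10.0.0.1:4242", "10.0.0.2:80", "ACK"))   # True: flow passes
```

A port scan that never elicits a SYN-ACK never reaches the established filter, so its packets are dropped by the monitor for free.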
Motivation: De novo assemblies of genomes remain one of the most challenging applications in next-generation sequencing. Usually, their results are incomplete and fragmented into hundreds of contigs. Repeats in genomes and sequencing errors are the main reasons for these complications. With the rapidly growing number of sequenced genomes, it is now feasible to improve assemblies by guiding them with genomes from related species. Results: Here we introduce AlignGraph, an algorithm for extending and joining de novo-assembled contigs or scaffolds guided by closely related reference genomes. It aligns paired-end (PE) reads and preassembled contigs or scaffolds to a close reference. From the obtained alignments, it builds a novel data structure, called the PE multipositional de Bruijn graph. The incorporated positional information from the alignments and PE reads allows us to extend the initial assemblies, while avoiding incorrect extensions and early terminations. In our performance tests, AlignGraph was able to substantially improve the contigs and scaffolds from several assemblers. For instance, 28.7–62.3% of the contigs of Arabidopsis thaliana and human could be extended, resulting in improvements of common assembly metrics, such as an increase of the N50 of the extendable contigs by 89.9–94.5% and 80.3–165.8%, respectively. In another test, AlignGraph was able to improve the assembly of a published genome (Arabidopsis strain Landsberg) by increasing the N50 of its extendable scaffolds by 86.6%. These results demonstrate AlignGraph's efficiency in improving genome assemblies by taking advantage of closely related references. Availability and implementation: The AlignGraph software can be downloaded for free from this site: https://github.com/baoe/AlignGraph.
['Ergude Bao', 'Tao Jiang', 'Thomas Girke']
AlignGraph: algorithm for secondary de novo genome assembly guided by closely related references.
11,501
We revisit the Raz-Safra plane-vs.-plane test and study the closely related cube-vs.-cube test. In this test the tester has access to a "cubes table" which assigns to every cube a low degree polynomial. The tester randomly selects two cubes (affine subspaces of dimension $3$) that intersect on a point $x\in \mathbf{F}^m$, and checks that the assignments to the cubes agree with each other on the point $x$. Our main result is a new combinatorial proof for a low degree test that comes closer to the soundness limit, as it works for all $\epsilon \ge \mathrm{poly}(d)/|\mathbf{F}|^{1/2}$, where $d$ is the degree. This should be compared to the previously best soundness value of $\epsilon \ge \mathrm{poly}(m, d)/|\mathbf{F}|^{1/8}$. Our soundness limit improves upon the dependence on the field size and does not depend on the dimension of the ambient space. Our proof is combinatorial and direct: unlike the Raz-Safra proof, it proceeds in one shot and does not require induction on the dimension of the ambient space. The ideas in our proof come from works on direct product testing which are even simpler in the current setting thanks to the low degree. Along the way we also prove a somewhat surprising fact about the connection between different agreement tests: it does not matter if the tester chooses the cubes to intersect on points or on lines: for every given table, its success probability in either test is nearly the same.
['Amey Bhangale', 'Irit Dinur', 'Inbal Navon']
Cube vs. Cube Low Degree Test
965,621
New transceiver technologies have emerged which enable power-efficient communication over very long distances. Examples of such Low-Power Wide-Area Network (LPWAN) technologies are LoRa, Sigfox and Weightless. A typical application scenario for these technologies is city-wide meter reading collection, where devices send readings at very low frequency over a long distance to a data concentrator (one-hop networks). We argue that these transceivers are potentially very useful for constructing more generic Internet of Things (IoT) networks incorporating multi-hop bidirectional communication, enabling sensing and actuation. Furthermore, these transceivers have interesting features not available with more traditional transceivers used for IoT networks, which enable the construction of novel protocol elements. In this paper we present a performance and capability analysis of a currently available LoRa transceiver. We describe its features and then demonstrate how such a transceiver can be put to use efficiently in a wide-area application scenario. In particular we demonstrate how unique features such as concurrent non-destructive transmissions and carrier detection can be employed. Our deployment experiment demonstrates that 6 LoRa nodes can form a network covering 1.5 ha in a built-up environment, achieving a potential lifetime of 2 years on 2 AA batteries and delivering data within 5 s with a reliability of 80%.
['Martin C. Bor', 'John Vidler', 'Utz Roedig']
LoRa for the Internet of Things
671,323
While bandwidth predictability has been well studied in static environments, it remains largely unexplored in the context of mobile computing. To gain a deeper understanding of this important issue in the mobile environment, we conducted an eight-month measurement study consisting of 71 repeated trips along a 23 km route in Sydney under typical driving conditions. To account for network diversity, we measure bandwidth from two independent cellular providers implementing the popular High-Speed Downlink Packet Access (HSDPA) technology at two different peak access rates (1.8 and 3.6 Mbps). Interestingly, we observe no significant correlation between the bandwidth signals at different points in time within a given trip. This observation eventually leads to the revelation that the popular time series models, e.g. the Autoregressive and Moving Average, typically used to predict network traffic in static environments are not as effective in capturing the regularity in mobile bandwidth. Although the bandwidth signal in a given trip appears as random white noise, we are able to detect the existence of patterns by analyzing the distribution of the bandwidth observed during the repeated trips. We quantify the bandwidth predictability reflected by these patterns using tools from information theory, entropy in particular. The entropy analysis reveals that the bandwidth uncertainty may be reduced by as much as 46% when observations from past trips are accounted for. We further demonstrate that the bandwidth in mobile computing appears more predictable when location is used as a context. All these observations are consistent across multiple independent providers offering different data transfer rates using possibly different networking hardware.
['Jun Yao', 'Salil S. Kanhere', 'Mahbub Hassan']
An empirical study of bandwidth predictability in mobile computing
187,776
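The entropy analysis can be reproduced in miniature on synthetic data: compare the marginal entropy of quantized bandwidth with its entropy conditioned on a location context. The bin counts, distributions, and sizes below are assumptions:

```python
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
n_locations, n_bins, samples = 20, 8, 5000
loc = rng.integers(0, n_locations, samples)            # road segment per sample
loc_mean = rng.uniform(0, n_bins, n_locations)         # per-segment bandwidth level
bw_bin = np.clip(np.round(loc_mean[loc] + rng.normal(0, 1.0, samples)),
                 0, n_bins - 1).astype(int)            # quantized bandwidth

H = entropy(np.bincount(bw_bin, minlength=n_bins))
H_loc = sum((loc == l).mean() *
            entropy(np.bincount(bw_bin[loc == l], minlength=n_bins))
            for l in range(n_locations) if (loc == l).any())
print(f"H(BW) = {H:.2f} bits, H(BW | location) = {H_loc:.2f} bits")
print(f"uncertainty reduction: {100 * (1 - H_loc / H):.0f}%")
```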
The potential of patient-centred care and a connected eHealth ecosystem can be developed through socially responsible innovative architectures. The purpose of this paper is to define key innovation needs. This is achieved through conceptual development of an architecture for common information spaces with emergent end-user applications by supporting intelligent processing of measurements, data and services at the Internet of Things (IoT) integration level. The scope is conceptual definition, and results include descriptions of social, legal and ethical requirements, an architecture, services and connectivity infrastructures for consumer-oriented healthcare systems linking co-existing healthcare systems and consumer devices. We conclude with recommendations based on an analysis of research challenges related to how to process the data securely and anonymously and how to interconnect participants and services with different standards and interaction protocols, and devices with heterogeneous hardware and software configurations.
['Geir Horn', 'Frank Eliassen', 'Amir Taherkordi', 'Salvatore Venticinque', 'Beniamino Di Martino', 'Monika Bucher', 'Lisa Wood']
An architecture for using commodity devices and smart phones in health systems
869,294
In this work, we investigate the power allocation (PA) problem aimed at minimizing the users' packet error rate (PER) over a noncooperative link, i.e., a link where the set of users, employing packet-oriented bit-interleaved coded (BIC) orthogonal frequency division multiplexing (OFDM) systems, compete for the same bandwidth. For these kinds of systems, the PER is not available in closed form, but a very efficient solution is offered by the effective SNR mapping (ESM) technique. This method allows each user to evaluate a single scalar value, the effective SNR (ESNR), accounting for all the SNIRs experienced over the active subcarriers, and to univocally map it into a PER value. Thus, in order to derive a decentralized strategy allowing each user to minimize its own PER, the problem is described as a strategic game, called the min-PER game, with the set of players, utilities, and strategies represented by the competitive users, the ESNRs, and the set of feasible power allocations, respectively. We will show both the existence of at least one Nash Equilibrium (NE) for the min-PER game and its equivalence with a Nonlinear Variational Inequality (NVI) problem. Finally, relying on the theory of contraction mappings, we will derive a distributed algorithm to reach the NE of the game.
['Riccardo Andreotti', 'Vincenzo Lottici', 'Filippo Giannetti', 'Ivan Stupia', 'Luc Vandendorpe']
A game theoretical approach for reliable packet transmission in noncooperative BIC-OFDM systems
400,104
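The abstract's ESNR is an effective-SNR mapping; the exponential ESM (EESM) sketched below is one common instance of the technique, not necessarily the exact mapping used in the paper, and the beta value is a placeholder that would normally be calibrated per modulation and coding scheme:

```python
import numpy as np

def eesm(snr_db, beta=4.0):
    """Map per-subcarrier SNRs (dB) to a single effective SNR (dB)."""
    snr_lin = 10 ** (np.asarray(snr_db) / 10)
    eff = -beta * np.log(np.mean(np.exp(-snr_lin / beta)))
    return 10 * np.log10(eff)

subcarrier_snrs = [12.0, 3.0, 8.5, 15.0, 1.0, 9.0]   # frequency-selective channel
print(f"effective SNR: {eesm(subcarrier_snrs):.2f} dB")
# The scalar ESNR is then mapped to a PER via an AWGN reference curve, so
# each user can evaluate (and optimize) its utility as a single number.
```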
We introduce theorems that enable efficient identification of indistinguishable fault pairs in synchronous sequential circuits using an iterative logic array of limited length. These theorems can be used for identifying fault pairs that can be dropped from consideration before diagnostic ATPG starts, thus improving the efficiency of diagnostic ATPG. Experimental results are presented to demonstrate the effectiveness of the proposed theorems, which allow us to identify almost all the indistinguishable fault pairs in finite-state machine benchmarks.
['M. Enamul Amyeen', 'Irith Pomeranz', 'W. Kent Fuchs']
Theorems for efficient identification of indistinguishable fault pairs in synchronous sequential circuits
171,262
Emotion Corpus Construction on Microblog Text
['Lei Huang', 'Shoushan Li', 'Guodong Zhou']
Emotion Corpus Construction on Microblog Text
764,336
Statistical Modeling with the Virtual Source MOSFET Model
['Li Yu', 'Lan Wei', 'Ibrahim M. Elfadel', 'Dimitri A. Antoniadis', 'Duane S. Boning']
Statistical Modeling with the Virtual Source MOSFET Model
545,288
Dual-polarization radar observations of wintertime thunderclouds over the Sea of Japan are presented. The range-height-indicator (RHI) and plan-position-indicator (PPI) scans, respectively, reveal the height and horizontal distributions of ice particles, such as graupel and ice crystals. The overall shape of these ice particles is also confirmed on the ground. The ice crystals are, in general, found at high altitude near the cloud top, whereas the graupel is primarily seen near the center of clouds. The PPI display indicates a bandlike horizontal structure, and the lightning tends to occur around the bandlike gap where the ice crystals are accumulated in advance on the windward side of the preceding cloud. Simultaneous field mill observations indicate electric charge separation between these ice particles precipitating from the thunderclouds.
['Yasuyuki Maekawa', 'Shoichiro Fukao', 'Yasuo Sonoi', 'Fumio Yoshino']
Dual polarization radar observations of anomalous wintertime thunderclouds in Japan
499,058
In this paper, a flexible middleware system for omni-directional video transmission for various teleconferencing applications is introduced. The omni-directional image has the advantage that it provides a wider view than a single directional camera and is able to realize flexible TV conferencing even between remotely separated small rooms. We describe the system architecture and functions of the middleware for high-definition omni-directional image control and an effective video transmission system using DV and HDV (1080i format) [4]. The MidField system, which we have developed so far, is used for video stream transmission over IPv4/IPv6 networks. A prototype TV conference system was constructed to evaluate our suggested high-definition omni-directional middleware. Through the functional and performance evaluation of the prototyped system, we verified the usefulness of our proposed system.
['Yosuke Sato', 'Yuya Maita', 'Koji Hashimoto', 'Yoshitaka Shibata']
A New Teleconference System and Its Applications by Omni-directional Audio and Video Transmission
146,259
Objective: The goal of this study was to evaluate two kinds of difficulty adaptation techniques in terms of enjoyment and performance in a simple memory training game: one based on difficulty-performance matching (“task-guided”) and the other based on providing a high degree of control/choice (“user-guided”). Methods: Performance and enjoyment are both critical in making serious games effective. Therefore the adaptations were based on two different approaches that are used to sustain performance and enjoyment in serious games: 1) adapting task difficulty to match user performance by leveraging the theories of zone of proximal development and flow, thus maximizing performance that can then lead to increased enjoyment and 2) providing a high degree of control and choice by using constructs from self-determination theory, which maximizes enjoyment, that can potentially increase performance. 24 participants played a simple memory training serious game in a fully randomized, repeated measures design. The primary outcome measures were enjoyment and performance. Results: Enjoyment was significantly greater in user-guided (p < 0.05), whereas performance was significantly greater in task-guided (p < 0.05). Conclusion: The results suggest that a trade-off between maximizing performance and maximizing enjoyment could be achieved by combining the two approaches into a “hybrid” adaptation mode that gives users a high degree of control in setting difficulty, but also advises them about optimizing performance.
['Aniket Nagle', 'Domen Novak', 'Peter Wolf', 'Robert Riener']
The effect of different difficulty adaptation strategies on enjoyment and performance in a serious game for memory training
863,772
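A staircase controller is one common way to realize the task-guided mode (difficulty matched to performance); the 2-up/1-down rule and parameters below are illustrative, not the study's actual adaptation logic:

```python
class Staircase:
    """2-up/1-down staircase: raise difficulty after two successes in a
    row, lower it after each failure (converges near ~71% success)."""
    def __init__(self, level=3, lo=1, hi=10):
        self.level, self.lo, self.hi, self.streak = level, lo, hi, 0

    def update(self, correct: bool) -> int:
        if correct:
            self.streak += 1
            if self.streak == 2:
                self.level, self.streak = min(self.level + 1, self.hi), 0
        else:
            self.level, self.streak = max(self.level - 1, self.lo), 0
        return self.level

s = Staircase()
for outcome in [True, True, True, False, True, True]:
    print(s.update(outcome), end=" ")   # 3 4 4 3 3 4
```

The hybrid mode suggested in the conclusion would let the user set the level directly while a controller like this one merely advises.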
Business Intelligence System in Banking Industry Case Study of Samam Bank of Iran
['Maryam Marefati', 'Seyyed Mohsen Hashemi']
Business Intelligence System in Banking Industry Case Study of Samam Bank of Iran
569,249
A Simple Human Activity Recognition Technique Using DCT
['Aziz Khelalef', 'Fakhreddine Ababsa', 'Nabil Benoudjit']
A Simple Human Activity Recognition Technique Using DCT
917,569
Gold-standard for Topic-specific Sentiment Analysis of Economic Texts
['Pyry Takala', 'Pekka Malo', 'Ankur Sinha', 'Oskar Ahlgren']
Gold-standard for Topic-specific Sentiment Analysis of Economic Texts
613,029
Towards Efficient HPSG Generation for German, a Non-Configurational Language
['Berthold Crysmann', 'Woodley Packard']
Towards Efficient HPSG Generation for German, a Non-Configurational Language
616,178
Close proximity of mobile devices can be utilized to create ad hoc and dynamic networks. These mobile Proximity Based Networks (PBNs) are Opportunistic Networks that enable devices to identify and communicate with each other without relying on any communication infrastructure. In addition, these networks are self organizing, highly dynamic and facilitate effective real-time communication. These characteristics render them very useful in a wide variety of complex scenarios such as vehicular communication, e-health, disaster networks, mobile social networks etc. In this work we employ the AllJoyn framework from Qualcomm which facilitates smooth discovery, attachment and data sharing between devices in close proximity. We develop Min-O-Mee, a Minutes-of-Meeting app prototype in the Android platform, utilizing the AllJoyn framework. Min-O-Mee allows one of the participants to create a minutes-of-meeting document which can be shared with and edited by the other participants in the meeting. The app harnesses the spatial proximity of participants in a meeting and enables seamless data exchange between them. This characteristic allows Min-O-Mee to share not just minutes-of-meeting, but any data that needs to be exchanged among the participants, making it a versatile app. Further, we extend the basic AllJoyn framework to enable multi-hop communication among the devices in the PBN. We devise a novel routing mechanism that is suited to a proximity centric wireless network as it facilitates data routing and delivery over several hops to devices that are at the fringe of the PBN.
['Hatim Lokhandwala', 'Srikant Manas Kala', 'Bheemarjuna Reddy Tamma']
Min-O-Mee: A Proximity Based Network application leveraging the AllJoyn framework
658,943
In this paper, we propose a method for activity recognition which can estimate new activities that do not appear in the training data, combining word vectors constructed from semantic word vectors and Twitter timestamps. Because traditional activity recognition relies on supervised machine learning, unknown activity classes which do not appear in the training dataset cannot be estimated. For this problem, zero-shot machine learning methods have been proposed, but they require the preparation of semantic codes. As semantic codes, we utilize word vectors constructed from semantic word vectors and Twitter timestamps. To evaluate the proposed method, we tested whether we could estimate unknown activity classes, with the sensor data set collected from 20 households for 4 months, along with user-generated labels from a web system that can estimate, modify, and add new activity types. As a result, the proposed method could estimate even unknown activity classes. Moreover, by utilizing Twitter timestamps and semantic word vectors from the Japanese Wikipedia in the word vectors, the method could estimate 9 unknown activity classes.
['Moe Matsuki', 'Sozo Inoue']
Recognizing unknown activities using semantic word vectors and twitter timestamps
887,742
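The zero-shot step can be sketched as follows: learn a linear map from sensor features into the word-vector space on known classes, then label new samples by the nearest class embedding, including classes absent from training. All data, embeddings, and the ridge-regression choice below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_feat, d_emb = 12, 5
W_true = rng.standard_normal((d_emb, d_feat))         # toy feature model

classes = ["sleeping", "eating", "commuting", "working", "shopping",
           "exercising", "cooking"]                   # "cooking" is unseen
emb = {c: rng.standard_normal(d_emb) for c in classes}

def samples(cls, n=40):
    return emb[cls] @ W_true + 0.05 * rng.standard_normal((n, d_feat))

seen = classes[:-1]
X = np.vstack([samples(c) for c in seen])
Y = np.vstack([np.tile(emb[c], (40, 1)) for c in seen])
W = np.linalg.solve(X.T @ X + 0.1 * np.eye(d_feat), X.T @ Y)  # ridge regression

def predict(x):
    z = x @ W                                         # into word-vector space
    sim = {c: emb[c] @ z / (np.linalg.norm(emb[c]) * np.linalg.norm(z))
           for c in classes}
    return max(sim, key=sim.get)

# Expected to recover the held-out class on this toy data (zero-shot).
print(predict(samples("cooking", 1)[0]))
```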
New York University KBP 2010 Slot-Filling System.
['Ralph Grishman', 'Bonan Min']
New York University KBP 2010 Slot-Filling System.
808,668
Distance Education for the minimally invasive method to monitor the intracranial pressure.
['Lilian Regina de Carvalho', 'Ursula Marcondes Westin', 'Silvia Helena Zem-Mascarenhas']
Distance Education for the minimally invasive method to monitor the intracranial pressure.
738,262
Background: Given information on just a few prior projects, how do we learn the best and fewest changes for current projects? Aim: To conduct a case study comparing two ways to recommend project changes. 1) Data farmers use Monte Carlo sampling to survey and summarize the space of possible outcomes. 2) Case-based reasoners (CBR) explore the neighborhood around test instances. Method: We applied a state-of-the-art data farmer (SEESAW) and a CBR tool (W2) to software project data. Results: CBR with W2 was more effective than SEESAW's data farming for learning best and recommended project changes, effectively reducing runtime, effort, and defects. Further, CBR with W2 was comparably easier to build, maintain, and apply in novel domains, especially on noisy data sets. Conclusion: Use CBR tools like W2 when data are scarce or noisy or when project data cannot be expressed in the required form of a data farmer. Future Work: This study applied our own CBR tool to several small data sets. Future work could apply other CBR tools and data farmers to other data (perhaps to explore other goals such as, say, minimizing maintenance effort).
['Tim Menzies', 'Adam Brady', 'Jacky Keung', 'Jairus Hihn', 'Steve Williams', 'Oussama El-Rawas', 'Phillip Green', 'Barry W. Boehm']
Learning Project Management Decisions: A Case Study with Case-Based Reasoning versus Data Farming
64,703
Recent experimental research in coherent detection has enabled 100G (or higher bit-rate) optical receivers to switch between wavelengths in less than a hundred nanoseconds. Such technologies enable a novel variant of the Time-domain Wavelength Interleaved Networks (TWIN) architecture in which fast tunable receivers replace tunable transmitters as the main switching elements in the otherwise passive optical network. Similar to TWIN, this architecture enables efficient sharing of 100G (or higher) wavelength rates among multiple destinations in metro networks or data centers where individual node-pairs may not need the full capacity of each wavelength. In this paper, we present the key elements of this novel variant of TWIN, discuss framing and scheduling efficiency, sub- and super-framing for TDM and packet data, as well as protection mechanisms. We also present the benefit of this approach relative to other optical network technologies. We conclude with an overview of the potential applications of this novel optical networking architecture.
['Giorgio Cazzaniga', 'Christian Hermsmeyer', 'Iraj Saniee', 'Indra Widjaja']
A new perspective on burst-switched optical networks
247,302
As one of the measures to realize large-scale renewable energy penetration, utilization of a BESS (battery energy storage system) in LFC (load frequency control) is expected. In this paper, the control performance of an LFC system that makes effective use of an existing LFC thermal unit and a BESS is verified utilizing the Miyako Island field test facility. The field test results show that the proposed LFC system can suppress frequency deviation more than LFC using only the existing LFC unit, and can significantly reduce the required capacity of the BESS compared with LFC using only the BESS.
['Hiroyuki Amano', 'Wataru Shima', 'Tomonori Kawakami', 'Toshio Inoue', 'Yasushi Uehara', 'Hirofumi Nakama', 'Yuji Oshiro', 'Masayoshi Toguchi']
Field verification of control performance of a LFC system to make effective use of existing power generation and battery energy storage system
911,141
This paper proposes a new idea for a stereo matching technique based on the minimal potential energy criterion. The method is concerned with Marr's human stereopsis model. According to his model, the real 3-D world is recognized by interpolating identified disparities. We think, however, that matching and interpolation should be done simultaneously. Our method realizes this by use of the Green's function, and is indicative of the strategic similarity between human and machine matching. It is shown that the potential energy minimum criterion is reduced to minimizing a quadratic form, and the minimization process can be localized. The method consists of two parts: 1) picking up disparity candidates using a support function; 2) selection of true disparities by searching for the flattest surface passing through the disparities. From our instinct this seems more natural and compatible with the actual human visual system than Marr's sequential model. The former half is to pick up disparity candidates for each edge by use of a support function [2], and the latter half is to select true disparities by searching for the flattest (potential energy minimum) surface which passes through one disparity candidate at each edge. This paper first treats the latter half for the convenience of discussion.
['Susumu Hattori', 'Shunji Murai', 'Hitoshi Ohtani']
A STEREO MATCHING METHOD USING THE GREEN'S FUNCTION
560,309
Metrics for the analysis of multiplex networks.
['Federico Battiston', 'Vincenzo Nicosia', 'Vito Latora']
Metrics for the analysis of multiplex networks.
776,520
A postprocessing method for the correction of visual demosaicking artifacts is introduced. The restored, full-color images previously obtained by cost-effective color filter array interpolators are processed to improve their visual quality. Based on a localized color ratio model and the original underlying Bayer pattern structure, the proposed solution impressively removes false colors while maintaining image sharpness. At the same time, it yields excellent improvements in terms of objective image quality measures.
['Rastislav Lukac', 'K. Martin', 'Konstantinos N. Plataniotis']
Demosaicked image postprocessing using local color ratios
345,183
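A simplified sketch of the local color-ratio idea: re-estimate an interpolated red value at a green Bayer site from neighboring red/green ratios. The paper's localized ratio model (and its handling of the Bayer structure) is more refined; the window and the unshifted ratio below are simplifications:

```python
import numpy as np

def correct_red_at_green(R, G, y, x, eps=1e-6):
    """Re-estimate R[y, x] at a green Bayer site from the horizontal native
    red neighbors, assuming the R/G ratio is locally constant."""
    ratios = [R[y, x - 1] / (G[y, x - 1] + eps),
              R[y, x + 1] / (G[y, x + 1] + eps)]
    return G[y, x] * np.median(ratios)       # rescale the true green sample

rng = np.random.default_rng(0)
R = np.clip(0.5 + 0.1 * rng.random((5, 5)), 0, 1)   # smooth toy channels
G = np.clip(0.6 + 0.1 * rng.random((5, 5)), 0, 1)
print(f"R before: {R[2, 2]:.3f}  after ratio correction: "
      f"{correct_red_at_green(R, G, 2, 2):.3f}")
```

Because the correction rescales the measured green sample rather than the interpolated red one, false colors are suppressed without blurring edges.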
A 3D architecture with DRAM memory stacked on a multi-core processor has many benefits for embedded systems. Compared with a conventional 2D design, it reduces memory access latency, increases memory bandwidth and reduces energy consumption. However, it poses a thermal challenge, as the heat generated by the processor cannot dissipate efficiently through the DRAM memory layer. Because DRAM is very sensitive to high temperature as well as temperature variance, 3D stacking causes more failures to occur, since DRAM thermal variance is higher than in the conventional 2D architecture. To address this thermal challenge we propose to reduce the temperature variance and peak temperature of a 3D multi-core processor and stacked DRAM by thermally aware thread migration among processor cores. This method has very limited impact on processor performance. Using a migration-based policy we reduce the peak steady-state temperature in the processor by up to 8.3 degrees Celsius, with an average of 4.7 degrees.
['Dali Zhao', 'Houman Homayoun', 'Alexander V. Veidenbaum']
Temperature aware thread migration in 3D architecture with stacked DRAM
7,707
We address novel two-handed interaction techniques in dual display interactive workspaces combining direct and indirect touch input. In particular, we introduce the notion of a horizontal tool space with task-dependent graphical input areas. These input areas are designed as single purpose control elements for specific functions and allow users to manipulate objects displayed on a vertical screen using simple one- and two-finger touch gestures and both hands. For demonstrating this concept, we use 3D modeling tasks as a specific application area. Initial feedback of six expert users indicates that our techniques are easy to use and stimulate exploration rather than precise modeling. Further, we gathered qualitative feedback during a multi-session observational study with five novices who learned to use our tool and were interviewed several times. Preliminary results indicate that working with our setup is easy to learn and remember. Participants liked the partitioning character of the dual-surface setup and agreed on the benefiting quality of touch input, giving them a 'hands-on feeling'.
['Henri Palleis', 'Julie Wagner', 'Heinrich Hussmann']
Novel Indirect Touch Input Techniques Applied to Finger-Forming 3D Models
777,810
An extractor is a function which extracts (almost) truly random bits from a weak random source, using a small number of additional random bits as a catalyst. We present a general method to reduce the error of any extractor. Our method works particularly well in the case that the original extractor extracts up to a constant fraction of the source min-entropy and achieves a polynomially small error. In that case, we are able to reduce the error to (almost) any ε, using only O(log(1/ε)) additional truly random bits (while keeping the other parameters of the original extractor more or less the same). In other cases (e.g. when the original extractor extracts all the min-entropy or achieves only a constant error), our method is not optimal but it is still quite efficient and leads to improved constructions of extractors. Using our method, we are able to improve almost all known extractors in the case where the error required is relatively small (e.g. less than a polynomially small error). In particular, we apply our method to the new extractors of L. Trevisan (1999) and R. Raz et al. (1999) to obtain improved constructions in almost all cases. Specifically, we obtain extractors that work for sources of any min-entropy on strings of length n which (a) extract any 1/n^γ fraction of the min-entropy using O(log n + log(1/ε)) truly random bits (for any γ > 0), (b) extract any constant fraction of the min-entropy using O(log² n + log(1/ε)) truly random bits, and (c) extract all the min-entropy using O(log³ n + log n · log(1/ε)) truly random bits.
['Ran Raz', 'Omer Reingold', 'Salil P. Vadhan']
Error reduction for extractors
122,228
This paper addresses the problem of semantic heterogeneity between data representations with particular emphasis on CAD tool data representations. The combination of powerful mapping operations and a flexible procedural interface are proposed as a possible solution to this problem. A practical application of the inter-operation of data representations is used to illustrate the techniques. The data representations used are the ICL COT data format and the TRACKER data format.
['Zahir Moosa', 'Nick Filer', 'Michael Brown', 'John Heaton', 'John Pye']
Practical inter-operation of CAD tools using a flexible procedural interface
61,201
A distributed access control module in wireless sensor networks (WSNs) allows the network to authorize and grant user access privileges for in-network data access. Prior research mainly focuses on designing such access control modules for WSNs, but little attention has been paid to protect user's identity privacy when a user is verified by the network for data accesses. Often, a user does not want the WSN to associate his identity to the data he requests, particularly in a single-owner multi-user WSN. In this paper, we present the design, implementation, and evaluation of a novel approach, Priccess, to ensure privacy-preserving access control. In addition to the theoretical analysis that demonstrates the security properties of Priccess, this paper also reports the experimental results of Priccess in a network of Imote2 motes, which show the efficiency of Priccess in practice.
['Daojing He', 'Jiajun Bu', 'Sencun Zhu', 'Mingjian Yin', 'Yi Gao', 'Haodong Wang', 'Sammy Chan', 'Chun Chen']
Distributed privacy-preserving access control in a single-owner multi-user sensor network
460,425
The utility-based itemset mining approach has been discussed widely in recent years. There are many algorithms that mine high utility itemsets (HUIs) by pruning candidates based on estimated utility values and on transaction-weighted utilization values. These algorithms aim to reduce the search space. In this paper, we propose a method for mining HUIs from vertical distributed databases. This method does not integrate the local databases at the SlaverSites into the MasterSite, and scans each local database one time. Experiments show that the run-time of this method is more efficient than that on the concentration database.
['Bay Vo', 'Huy Nguyen', 'Bac Le']
Mining High Utility Itemsets from Vertical Distributed Databases
380,243
This paper presents a new image analysis method combining motion estimation and image segmentation. Whereas neither of these methods, used on its own, delivers error-free meta-information, an appropriate combination leads to an orthogonalization of these errors. This method is applied to improve the quality of motion vector fields.
['Holger Blume', 'G. Herczeg', 'Oliver Erdler', 'Tobias G. Noll']
Object based refinement of motion vector fields applying probabilistic homogenization rules
467,427
This paper describes a real-time, strip-based, low-complexity document page classification algorithm, which can be used as a copy mode selector in the copy pipeline. The benefits of such a copy mode selector include improving copy quality, simplifying user interaction, and increasing copy rate.
['Xiaogang Dong', 'Peter Majewicz', 'Gordon McNutt', 'Charles A. Bouman', 'Jan P. Allebach', 'Ilya Pollak']
A Document Page Classification Algorithm in Copy Pipeline
439,708
Duality for Multiobjective Semidefinite Optimization Problems
['Sorin-Mihai Grad']
Duality for Multiobjective Semidefinite Optimization Problems
823,295
In this paper, we describe how language generation and speech synthesis for spoken dialog systems can be efficiently integrated under a weighted finite state transducer architecture. Taking advantage of this efficiency, we show that introducing flexible targets in generation leads to more natural sounding synthesis. Specifically, we allow multiple wordings of the response and multiple prosodic realizations of the different wordings. The choice of wording and prosodic structure are then jointly optimized with unit selection for waveform generation in speech synthesis. Results of perceptual experiments show that by integrating the steps of language generation and speech synthesis, we are able to achieve improved naturalness of synthetic speech compared to the sequential implementation.
['Ivan Bulyko', 'Mari Ostendorf']
Efficient integrated response generation from multiple targets using weighted finite state transducers
92,986
We study a hydrodynamic limit approach to move-to-front rules, namely, a scaling limit as the number of items tends to infinity, of the joint distribution of jump rate and position of items. As an application of the limit formula, we present asymptotic formulas on search cost probability distributions, applicable for general jump rate distributions.
['Kumiko Hattori', 'Tetsuya Hattori']
Hydrodynamic limit of move-to-front rules and search cost probabilities
263,254
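The search-cost distribution is easy to approximate empirically; the sketch below simulates move-to-front under i.i.d. Zipf-distributed requests (the request distribution and sizes are arbitrary choices):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_items, n_requests = 50, 200_000
weights = 1.0 / np.arange(1, n_items + 1)            # Zipf(1) popularity
probs = weights / weights.sum()

lst = list(range(n_items))
costs = Counter()
for item in rng.choice(n_items, size=n_requests, p=probs):
    pos = lst.index(item)                            # search cost = position
    costs[pos] += 1
    lst.insert(0, lst.pop(pos))                      # move the item to front

# Empirical tail of the stationary search-cost distribution.
total = sum(costs.values())
for c in (0, 1, 5, 20):
    tail = sum(v for k, v in costs.items() if k > c) / total
    print(f"P(cost > {c}) ~ {tail:.3f}")
```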
Frequency-hopping (FH) spread spectrum and direct-sequence spread spectrum are two main spread-coding technologies. Frequency-hopping sequences are needed in FH code-division multiple-access (CDMA) systems. In this correspondence, three classes of optimal frequency-hopping sequences are constructed with algebraic methods. The three classes are based on perfect nonlinear functions, power functions, and norm functions, respectively. Both individual optimal frequency-hopping sequences and optimal families of frequency-hopping sequences are presented.
['Cunsheng Ding', 'Marko J. Moisio', 'Jin Yuan']
Algebraic Constructions of Optimal Frequency-Hopping Sequences
226,443
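A small illustration of what "optimal" means here: a frequency-hopping sequence is judged by its maximum Hamming autocorrelation against the Lempel-Greenberger lower bound. The power-function sequence below is a toy instance for checking the bound, not one of the paper's three constructions:

```python
import numpy as np

def max_hamming_autocorr(seq):
    """Maximum number of coincidences over all nonzero cyclic shifts."""
    s = np.asarray(seq)
    return max(int((s == np.roll(s, t)).sum()) for t in range(1, len(s)))

def lempel_greenberger_bound(n, q):
    """Lower bound on the max Hamming autocorrelation (eps = n mod q)."""
    eps = n % q
    return -((-(n - eps) * (n + eps - q)) // (q * (n - 1)))  # ceil division

# A power-function example over Z_7: s(t) = t^2 mod 7.
p = 7
s = [(t * t) % p for t in range(p)]
q = len(set(s))                                              # frequencies used
h = max_hamming_autocorr(s)
print(f"sequence {s}: H_max = {h}, bound = {lempel_greenberger_bound(len(s), q)}")
# H_max meets the bound here, so this short sequence is optimal in the
# Lempel-Greenberger sense.
```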
The slip rate of a seismogenic fault is a crucial parameter for establishing the contribution of the fault to the seismic hazard. It is calculated from measurements of the offset of linear landforms, such as channels, produced by the fault, combined with their age. The three-dimensional measurement of offset in buried paleochannels is subject to uncertainties that need to be quantitatively assessed and propagated into the slip rate. Here, we present a set of adapted scripts to calculate the net, lateral and vertical tectonic offset components caused by faults, together with their associated uncertainties. This technique is applied here to a buried channel identified in the stratigraphic record during a paleoseismological study at the El Saltador site (Alhama de Murcia fault, Iberian Peninsula). After defining and measuring the coordinates of the key points of a buried channel in the walls of eight trenches excavated parallel to the fault, we (a) adjusted a 3D straight line to these points and then extrapolated the tendency of this line onto a simplified fault plane; (b) repeated these two steps for the segment of the channel on the other side of the fault; and (c) measured the distance between the two resulting intersection points on the fault plane. In doing so, we avoided the near-fault modification of the channel trace and obtained a three-dimensional measurement of the offset and its uncertainty. This methodology is a substantial modification of previous procedures that require excavating progressively towards the fault, which can lead to underestimation of offset due to diffuse deformation near the fault. Combining the offset with numerical dating of the buried channel via U-series on soil carbonate, we calculated a maximum estimate of the net slip rate and its vertical and lateral components for the Alhama de Murcia fault. Highlights: The proposed methodological approach modifies existing 3D paleoseismological techniques. The offset of a buried channel is measured by projecting its tendency onto the fault. The calculated maximum net slip rate for the Alhama de Murcia fault is 1.3 +0.2/-0.1 mm/yr.
['Marta Ferrater', 'Anna Echeverria', 'Eulàlia Masana', 'J. J. Martínez-Díaz', 'Warren D. Sharp']
A 3D measurement of the offset in paleoseismological studies
650,427
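A sketch of the offset computation described in steps (a)-(c), with numpy in place of the authors' scripts; the coordinates and the vertical fault plane are made up, and the uncertainty propagation (e.g., Monte Carlo over the key-point coordinates) is omitted:

```python
import numpy as np

def fit_line(points):
    """Least-squares 3D line: centroid plus principal direction (SVD)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def pierce(p0, d, plane_point, normal):
    """Intersection of the line p0 + t*d with the fault plane."""
    t = (plane_point - p0) @ normal / (d @ normal)
    return p0 + t * d

normal, origin = np.array([1.0, 0.0, 0.0]), np.zeros(3)  # fault plane x = 0
# Channel key points measured in trench walls on either side of the fault
# (x across the fault, y along strike, z vertical; values are made up):
east = [[3.0, 1.0, 0.20], [4.1, 1.5, 0.30], [5.0, 2.1, 0.45]]
west = [[-2.0, 4.2, 1.10], [-3.1, 4.8, 1.25], [-4.0, 5.3, 1.40]]

a = pierce(*fit_line(east), origin, normal)
b = pierce(*fit_line(west), origin, normal)
off = b - a
print(f"net offset: {np.linalg.norm(off):.2f} m "
      f"(lateral: {abs(off[1]):.2f} m, vertical: {abs(off[2]):.2f} m)")
```

Extrapolating both line fits onto the plane, rather than digging toward the fault, is what sidesteps the diffuse near-fault deformation mentioned above.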
In this paper, we have designed a novel touchscreen interaction technique for light-weight navigation: Bezel-Flipper. The design specifics and initial prototype application are developed with user evaluation. We received overall positive feedback from our initial user study in terms of engagement and enjoyment.
['Sangtae Kim', 'Jaejeung Kim', 'Soobin Lee']
Bezel-flipper: design of a light-weight flipping interface for e-books
283,841
Wireless communication depending on satellite systems constructed from numerous low earth orbit satellites has presented many advantageous traits for space navigation and detection, and has gradually become a dominant development trend in mission-driven telecommunication. Based on Flower Constellation design theory, a design method for low earth orbit micro-satellite formations, which are capable of providing compact topology characterized by a specific geometrical configuration, is presented, aiming at optimizing satellite systems' multiple regional-coverage performance with high robustness and reliability. With the assistance of the AGI Satellite Tool Kit software package, a satellite formation system with circular topology is constructed, where the orbit parameters of all the satellites constituting the formation are specified for further analysis. Based on this, a feasible solution for the deployment of permanent inter-satellite links is proposed and the characteristics of its geometrical parameters are investigated. The multiple regional-coverage performance of the proposed formation is then analyzed based on the theory of apogee orientation. Simulation results indicate that the formation system proposed in this paper can provide more favorable satellite-to-satellite access capability and greater implementation possibility in low earth orbit satellite networking for robust regional coverage compared with traditional global-coverage satellite constellations.
['Bingcai Chen', 'Zhinan Li', 'Xiaodong Yang']
Regional coverage scheme based on LEO microsatellite formation
444,223
In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequence with its neighborhood agents at each time step due to the digital communication channels with energy constraints. Novel event-triggered dynamic encoder and decoder for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization level (number of bits) at each time step is developed, under which all the quantizers in the network are never saturated. The convergence rate of consensus is explicitly characterized, which is related to the scale of network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, the distributed average consensus can be always achieved with an exponential convergence rate based on merely one bit information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of presented protocol and the correctness of the theoretical results.
['Huaqing Li', 'Guo Chen', 'Tingwen Huang', 'Zhaoyang Dong', 'Wei Zhu', 'Lan Gao']
Event-Triggered Distributed Average Consensus Over Directed Digital Networks With Limited Communication Bandwidth
829,569
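The encoder/decoder pair at the heart of such a protocol can be illustrated with a small sketch. The following is a minimal, hypothetical one-bit scheme with a geometrically decaying scaling function, not the authors' exact design; the gain `h`, the scaling parameters `gamma0`/`rho`, and the synchronous (non-event-triggered) updates are all simplifying assumptions.

```python
import numpy as np

def one_bit_quantize(e):
    """Encoder: map the coding error to a single bit (its sign)."""
    return 1.0 if e >= 0 else -1.0

def consensus_one_bit(x0, adjacency, h=0.1, gamma0=1.0, rho=0.95, steps=300):
    """Average consensus where each agent broadcasts one bit per step.

    Every agent i maintains an estimate xhat[i] of its own state as decoded
    by its neighbors; the encoder sends sign(x[i] - xhat[i]), and all
    receivers apply the identical decoder update scaled by gamma_k, so the
    encoder and decoder states never drift apart.
    """
    x = np.array(x0, dtype=float)
    xhat = np.zeros_like(x)             # shared encoder/decoder states
    gamma, n = gamma0, len(x)
    for _ in range(steps):
        bits = np.array([one_bit_quantize(x[i] - xhat[i]) for i in range(n)])
        xhat = xhat + gamma * bits      # decoder update (same at all nodes)
        for i in range(n):              # steer toward decoded neighbor states
            nbrs = np.nonzero(adjacency[i])[0]
            x[i] += h * sum(xhat[j] - xhat[i] for j in nbrs)
        gamma *= rho                    # scaling function decays geometrically
    return x

# Usage: a directed 4-node ring (contains a spanning tree).
A = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]])
print(consensus_one_bit([1.0, 3.0, -2.0, 6.0], A))  # states approach the mean 2.0
```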
Surface Reconstruction of Renal Corpuscle from Microscope Renal Biopsy Image Sequence
['Jun Zhang', 'Jinglu Hu']
Surface Reconstruction of Renal Corpuscle from Microscope Renal Biopsy Image Sequence
952,830
In this paper, we aim to solve the problem of estimating complete dense depth maps from a monocular moving camera. By 'complete', we mean that depth information is estimated for every pixel and detailed reconstruction is achieved. Although this problem has previously been attempted, the accuracy of complete dense depth reconstruction remains an open problem. We propose a novel system which produces accurate complete dense depth maps. The new system consists of two subsystems running in separate threads, namely, dense mapping and sparse patch-based tracking. For dense mapping, a new projection error computation method is proposed to enhance the gradient component in estimated depth maps. For tracking, a new sparse patch-based tracking method estimates camera pose by minimizing a normalized error term. The experiments demonstrate that the proposed method obtains improved performance in terms of completeness and accuracy compared to three state-of-the-art dense reconstruction methods: VSFM+CMVC, LSDSLAM and REMODE.
['Xiaoshui Huang', 'Lixin Fan', 'Jian Zhang', 'Qiang Wu', 'Chun Yuan']
Real Time Complete Dense Depth Reconstruction for a Monocular Camera
854,490
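For orientation, the standard per-pixel projection (photometric) error that dense mapping systems of this kind minimize can be sketched as follows. This is the generic formulation, not the paper's gradient-enhanced variant; the pinhole intrinsics `K` and relative pose `(R, t)` are assumed inputs.

```python
import numpy as np

def photometric_error(I_ref, I_cur, depth, K, R, t):
    """Back-project each reference pixel with its depth, transform by (R, t),
    reproject into the current frame, and compare intensities."""
    h, w = I_ref.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    err = np.zeros((h, w))
    for v in range(h):
        for u in range(w):
            z = depth[v, u]
            p = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
            q = R @ p + t                # point in the current camera frame
            if q[2] <= 0:
                continue                 # behind the camera: skip
            u2 = int(round(fx * q[0] / q[2] + cx))
            v2 = int(round(fy * q[1] / q[2] + cy))
            if 0 <= u2 < w and 0 <= v2 < h:
                err[v, u] = I_ref[v, u] - I_cur[v2, u2]
    return err
```

Depth estimation then amounts to searching, per pixel, for the depth value that minimizes this error along the epipolar line.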
Analysis of gold nanoparticles as carriers for different molecular dye type photosensitizers in Photodynamic Therapy applied to carcinomas
['I. Salas-García', 'F. Fanjul-Vélez', 'José Luis Arce-Diego']
Analysis of gold nanoparticles as carriers for different molecular dye type photosensitizers in Photodynamic Therapy applied to carcinomas
674,169
Since the 1990s, several new approaches providing solutions for integrated or federated systems have been defined. In particular, with respect to geographic information systems, proposals for integration did not take long to appear. However, the applicability of many of those approaches remains limited. In this context, we introduce an ontology-based approach - the GeoMergeP system - aiming at improving the capabilities of the integration process. In this paper we propose a methodology composed of two main processes: semantic enrichment (by adding information from the ISO 19100 standard) and merging. The latter performs some tasks automatically and guides the user in performing other tasks for which his/her intervention is required. Finally, a plugin for the ontology editor Protege is presented, showing how the method is implemented through a case study.
['Agustina Buccella', 'Laura Perez', 'Alejandra Cechich']
GeoMergeP: Supporting an Ontological Approach to Geographic Information Integration
400,944
Now that innovation in networked structures (Living Labs are one example) has become the emerging paradigm, the need for a holistic approach towards collaborative issues is hampered by the fragmentation of research into innovation management. Partial solutions caused by this fragmentation will only increase the number of conflicting issues that need resolving. To this end, this paper offers seven perspectives for addressing innovation in networked structures, especially by evaluating the perspectives against the particular characteristics of Living Labs. None of these perspectives, based on a literature review and supported by empirical evidence, offers a complete picture of innovation that crosses the organisational boundaries of the monolithic firm. To understand and solve challenges related to Living Labs, a more integrated and comprehensive view becomes necessary, based on the integration of the individual perspectives. In addition, a research agenda for Living Labs follows from the issues emerging from the review of the perspectives.
['Rob Dekkers']
Perspectives on Living Labs as innovation networks
78,310
Probe design is the most important step in any microarray-based assay. Accurate and efficient probe design and selection for the target sequence is critical for generating reliable and useful results. Several different approaches to probe design are reported in the literature, and an increasing number of bioinformatics tools are available for this purpose. However, given the reported low accuracy, determining the hybridization efficiency of probes is still a major computational challenge. The present study deals with the extraction of various novel features related to sequence composition, thermodynamics and secondary structure that may be essential for designing good probes. A feature selection method has been used to assess the relative importance of all these features. In this paper, we validate the importance of various features currently used for designing an oligonucleotide probe. Finally, a classification methodology is presented that can be used to predict the hybridization quality of a probe.
['Lalit Gupta', 'Sunil Kumar', 'Randeep Singh', 'Rafi Shaik', 'Nevenka Dimitrova', 'Aparna Gorthi', 'B. Lakshmi', 'Deepa Pai', 'Sitharthan Kamalakaran', 'Xiaoyue Zhao', 'Michael Wigler']
Classification method for microarray probe selection using sequence, thermodynamics and secondary structure parameters
364,238
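As a concrete illustration of the pipeline (feature extraction, feature ranking, classification), here is a minimal sketch. The features, training probes, and labels are all hypothetical; the paper's thermodynamic and secondary-structure features would come from dedicated tools (nearest-neighbor Tm models, folding predictors) and are omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def longest_homopolymer(seq):
    """Length of the longest single-base run (a simple composition feature)."""
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def probe_features(seq):
    """Toy sequence-composition features for a probe."""
    n = len(seq)
    gc = (seq.count('G') + seq.count('C')) / n
    # Wallace rule: a rough melting-temperature estimate for short oligos
    tm = 2 * (seq.count('A') + seq.count('T')) + 4 * (seq.count('G') + seq.count('C'))
    return [gc, tm, longest_homopolymer(seq), n]

# Hypothetical training data: probes labeled good (1) / bad (0) hybridizers.
probes = ['ACGTACGTACGTACGTACGT', 'GGGGGCCCCCGGGGGCCCCC',
          'ATATATATATATATATATAT', 'ACGGTCAGTTCAGGACCTAA']
labels = [1, 0, 0, 1]
X = np.array([probe_features(p) for p in probes])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
# Feature importances play the role of the paper's feature-selection step.
print(clf.feature_importances_)
```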
Boolean Functions
['Yves Crama', 'Peter L. Hammer']
Boolean Functions
716,706
Glottal closure instants (GCI) are the instants of significant excitation of the vocal tract system during the production of speech. The performance of many speech analysis and synthesis approaches depends on accurate estimation of GCIs. A new method for detecting GCI locations in speech utterances using Fourier-Bessel (FB) expansion and an amplitude and frequency modulated (AM-FM) signal model is proposed in this paper. The inherent filtering property of the FB expansion is used to weaken the effect of formants in the speech utterances. The band-limited signal reconstructed from FB coefficients is treated as an AM-FM signal. The discrete energy separation algorithm (DESA) is used to estimate the amplitude envelope (AE) function of the AM-FM signal model. The estimated AE function is then explored for the detection of GCIs. The CMU-Arctic database is used for validation of the proposed method, with the electro-glottograph (EGG) signal as a reference. It is observed that the proposed method provides accurate GCI locations.
['Ram Bilas Pachori', 'Suryakanth V. Gangashetty']
AM-FM model based approach for detection of glottal closure instants
124,841
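The DESA step can be sketched directly from its textbook definition. The following is a generic DESA-1 amplitude-envelope estimator built on the Teager-Kaiser energy operator; in the paper it would be applied to the band-limited signal reconstructed from FB coefficients, whereas here a synthetic AM-FM tone stands in.

```python
import numpy as np

def teager(x):
    """Teager-Kaiser energy operator: Psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa1_amplitude(x, eps=1e-12):
    """Generic DESA-1 amplitude-envelope estimate for an AM-FM signal."""
    y = np.diff(x)                     # y(n) = x(n) - x(n-1)
    px = teager(x)                     # Psi[x]
    py = teager(y)                     # Psi[y]
    # G(n) = 1 - (Psi[y](n) + Psi[y](n+1)) / (4 Psi[x](n)), cos of inst. freq.
    g = 1.0 - (py[:-1] + py[1:]) / (4.0 * px[1:len(py)] + eps)
    # |a(n)| ~ sqrt(Psi[x](n) / (1 - G(n)^2))
    return np.sqrt(np.abs(px[1:len(py)]) / np.maximum(1.0 - g ** 2, eps))

# Usage: an AM-FM test tone; envelope peaks would be the GCI candidates.
n = np.arange(2000)
sig = (1 + 0.5 * np.sin(2 * np.pi * 0.002 * n)) * np.sin(2 * np.pi * 0.05 * n)
env = desa1_amplitude(sig)
```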
Mosquito-borne diseases cause considerable human mortality and morbidity around the globe, particularly in tropical countries. Mosquito-borne diseases such as dengue, malaria and yellow fever exhibit a distinct seasonal pattern. This leads to the hypothesis that episodes of mosquito-borne diseases are related to climatic conditions. It is observed that global atmospheric temperature has increased significantly over the last decades due to global warming, leading to a concern that mosquito-borne diseases may become more prevalent with the increase in temperature and precipitation. The changes in climate are expected to disrupt seasonal weather patterns, to increase the incidence rate of these diseases and to cause the disease to spread over a broader geographical range. This paper investigates the potential effect of climate change on mosquito population distribution and on the incidence rate of dengue fever.
['Tan Kah Bee', 'Koh Hock Lye', 'Teh Su Yean']
Modeling Dengue Fever Subject to Temperature Change
235,556
The hybrid automatic repeat request (HARQ) protocol typically operates in multi-process stop-and-wait (SAW) mode to fully utilize channel capacity. A prior packet, due to retransmission(s), may arrive at the receiving side later than packets that follow it but use other HARQ processes, thus leading to out-of-order reception. The medium access control (MAC) layer is responsible for packet reordering operations so as to offer in-sequence delivery to the upper layer. An arriving packet should be temporarily stored in the reordering buffer if at least one prior packet has not arrived yet. Nevertheless, protocol stalling may occur if a packet is discarded at the transmitter due to excessive failed retransmissions, or if a negative acknowledgement (NACK) signal is inverted into an acknowledgement (ACK). A timer-based mechanism is suggested by 3GPP to handle protocol stalling. In this paper, we propose an adaptive timer-based stalling avoidance mechanism as an enhancement to the current 3GPP specifications. In our proposal, the duration of the timer is set or updated through dynamic monitoring of the status of each HARQ process. We show with illustrative examples and simulations that the proposed mechanism considerably reduces the extra packet delay incurred during reordering operations.
['Yong Li', 'Wenbo Wang', 'Jiangfeng Ji', 'Mugen Peng']
On the Enhancement to Timer-Based Stalling Avoidance Mechanism in HARQ Protocols
261,776
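A minimal sketch of the reordering buffer with a stall-avoidance timer is given below. The fixed `timeout` is a placeholder assumption; the paper's adaptive mechanism would instead set and update it from the monitored status of each HARQ process.

```python
class ReorderingBuffer:
    """MAC-layer reordering with a stall-avoidance timer (sketch).

    Packets carry increasing sequence numbers; out-of-order arrivals are held
    until the gap fills or the timer expires (the 3GPP-style fallback).
    """
    def __init__(self, timeout=5):
        self.next_sn = 0
        self.buffer = {}
        self.timeout = timeout
        self.timer = None                 # deadline tick, armed while a gap exists

    def on_packet(self, sn, now):
        delivered = []
        if sn < self.next_sn:
            return delivered              # duplicate / already flushed
        self.buffer[sn] = True
        while self.next_sn in self.buffer:        # deliver in-sequence prefix
            del self.buffer[self.next_sn]
            delivered.append(self.next_sn)
            self.next_sn += 1
        self.timer = (now + self.timeout) if self.buffer else None
        return delivered

    def on_tick(self, now):
        """On expiry, assume the missing packet was discarded and flush."""
        delivered = []
        if self.timer is not None and now >= self.timer and self.buffer:
            self.next_sn = min(self.buffer)       # skip the presumed-lost gap
            while self.next_sn in self.buffer:
                del self.buffer[self.next_sn]
                delivered.append(self.next_sn)
                self.next_sn += 1
            self.timer = (now + self.timeout) if self.buffer else None
        return delivered
```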
The majority of research in evolvable hardware is focused on evolving logic for deployment on reconfigurable hardware. There are far fewer reports concerned with the implementation of evolutionary algorithms (EAs) in hardware. The focus of our research is directed toward using reconfigurable hardware as a means to speed up evolutionary search, and in particular evolutionary multiobjective optimization (EMO). Evolutionary multiobjective optimization utilizes an evolutionary search to find solutions to difficult multiobjective optimization problems. We present an implementation of an EMO algorithm in reconfigurable hardware, and discuss how it may be utilized in practical deployment situations.
['Stefano R. Bonissone', 'Raj Subbu']
Evolutionary Multiobjective Optimization on a Chip
61,356
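The kernel such a hardware EMO accelerates is, at bottom, Pareto-dominance filtering over a population. A software sketch of that kernel (for minimization, on a toy two-objective problem) follows; a hardware version would map the pairwise comparisons to parallel logic, which is where the speedup comes from.

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop, f):
    """Return the non-dominated members of a population under objectives f."""
    scores = [f(p) for p in pop]
    return [p for p, s in zip(pop, scores)
            if not any(dominates(t, s) for t in scores if t != s)]

# Usage on a toy two-objective problem: minimize x^2 and (x - 2)^2.
f = lambda x: (x * x, (x - 2) ** 2)
pop = [random.uniform(-1, 3) for _ in range(50)]
front = pareto_front(pop, f)      # solutions trading off the two objectives
```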
As more and more organizations move from functional to process-based IT infrastructure, ERP systems are becoming one of today's most widespread IT solutions. However, not all firms have been successful in their ERP implementations. Using a case study methodology grounded in business process change theory, this research tries to understand the factors that lead to the success or failure of ERP projects. The results from our comparative case study of four firms that implemented an ERP system suggest that a cautious, evolutionary, bureaucratic implementation process, backed by careful change management, network relationships, and cultural readiness, has a positive impact on ERP implementation. Understanding such effects will enable managers to be more proactive and better prepared for ERP implementation. Managerial implications of the findings and future research directions are discussed.
['Jaideep Motwani', 'Ram Subramanian', 'Pradeep Gopalakrishna']
Critical factors for successful ERP implementation: exploratory findings from four case studies
508,423
The knowledge of multiple conformational states is a prerequisite to understand the function of membrane transport proteins. Unfortunately, the determination of detailed atomic structures for all these functionally important conformational states with conventional high-resolution approaches is often difficult and unsuccessful. In some cases, biophysical and biochemical approaches can provide important complementary structural information that can be exploited with the help of advanced computational methods to derive structural models of specific conformational states. In particular, functional and spectroscopic measurements in combination with site-directed mutations constitute one important source of information to obtain these mixed-resolution structural models. A very common problem with this strategy, however, is the difficulty of simultaneously integrating all the information from multiple independent experiments involving different mutations or chemical labels to derive a unique structural model consistent with the data. To resolve this issue, a novel restrained molecular dynamics structural refinement method is developed to simultaneously incorporate multiple experimentally determined constraints (e.g., engineered metal bridges or spin-labels), each treated as an individual molecular fragment with all atomic details. The internal structure of each of the molecular fragments is treated realistically, while there is no interaction between different molecular fragments to avoid unphysical steric clashes. The information from all the molecular fragments is exploited simultaneously to constrain the backbone to refine a three-dimensional model of the conformational state of the protein. The method is illustrated by refining the structure of the voltage-sensing domain (VSD) of the Kv1.2 potassium channel in the resting state and by exploring the distance histograms between spin-labels attached to T4 lysozyme. The resulting VSD structures are in good agreement with the consensus model of the resting state VSD and the spin-spin distance histograms from ESR/DEER experiments on T4 lysozyme are accurately reproduced.
['Rong Shen', 'Wei Han', 'Giacomo Fiorin', 'Shahidul M. Islam', 'Klaus Schulten', 'Benoît Roux']
Structural Refinement of Proteins by Restrained Molecular Dynamics Simulations with Non-interacting Molecular Fragments.
488,793
In the last few years, communication between humans and the virtual world has been made easier by the advent of smartphones. This paper presents a method allowing a smartphone user to recognize, from any viewpoint, a 3D model displayed on a screen. The model is selected from a database of 190 objects and displayed rotating along the vertical axis. This method can be used to allow interaction with the recognized model on the smartphone or to retrieve information about the object. Curvature scale space based contour recognition and color matching are used to identify the captured object. Evaluation experiments show a recognition success rate of 92% on a data set of one hundred photographs.
['Karim Kadar', 'Francois de Sorbier', 'Hideo Saito']
Displayed Object Recognition for Smartphone Interaction
564,927
Reconfigurable hardware (RH) is used in an increasing variety of applications, many of which require support for features commonly found in general purpose systems. In this work we examine some of the challenges faced in integrating RH with general purpose processors and memory systems. We propose a new CPU-RH-memory interface that takes advantage of on-chip caches and uses virtual memory for communication. Additionally we describe the simulator model we developed to evaluate this new architecture. This work shows that an efficient interface can greatly accelerate RH applications, and provides a strong first step toward multiprocessor reconfigurable computing.
['Philip Garcia', 'Katherine Compton']
A Reconfigurable Hardware Interface for a Modern Computing System
443,873
Makespan minimization in restricted assignment (R | p_ij ∈ {p_j, ∞} | C_max) is a classical problem in the field of machine scheduling. In a landmark paper, [Lenstra, Shmoys, and Tardos, Math. Progr. 1990] gave a 2-approximation algorithm and proved that the problem cannot be approximated within 1.5 unless P=NP. The upper and lower bounds of the problem have remained essentially unimproved in the intervening 25 years, despite several remarkable successful attempts on some special cases of the problem recently. In this paper, we consider a special case called graph-balancing with light hyper edges, where heavy jobs can be assigned to at most two machines while light jobs can be assigned to any number of machines. For this case, we present algorithms with approximation ratios strictly better than 2. Specifically: (i) Two job sizes: suppose that light jobs have weight w and heavy jobs have weight W, with w < W. We give a 1.5-approximation algorithm (note that the current 1.5 lower bound is established in an even more restrictive setting). Indeed, depending on the specific values of w and W, our algorithm sometimes guarantees sub-1.5 approximation ratios. (ii) Arbitrary job sizes: suppose that W is the largest given weight, heavy jobs have weights in the range (βW, W], where 4/7 ≤ β < 1, and light jobs have weights in the range (0, βW]. We present a (5/3 + β/3)-approximation algorithm. Our algorithms are purely combinatorial, without the need of solving a linear program as required in most other known approaches.
['Chien-Chung Huang', 'Sebastian Ott']
A Combinatorial Approximation Algorithm for Graph Balancing with Light Hyper Edges
603,746
Demystifying Ontological Classification in Language Engineering
['Colin Atkinson', 'Thomas Kühne']
Demystifying Ontological Classification in Language Engineering
860,735
This paper proposes applying machine learning techniques to the task of combining the outputs of multiple LVCSR models, where, as machine learning features, information such as the models which output the hypothesized word, its part-of-speech, and its syllable length is useful for improving the word recognition rate. Experimental results show that the combination result outperforms several baselines, including model combination by voting such as ROVER, in word recognition rate. Furthermore, unlike model combination by voting, the word recognition rate of model combination by machine learning does not degrade even when only a minority of the participating models perform well in the word recognition task. © 2005 Wiley Periodicals, Inc. Syst Comp Jpn, 36(10): 9–15, 2005; Published online in Wiley InterScience. DOI 10.1002/scj.20340
['Takehito Utsuro', 'Yasuhiro Kodama', 'Tomohiro Watanabe', 'Hiromitsu Nishizaki', 'Seiichi Nakagawa']
Combining outputs of multiple LVCSR models by machine learning
142,743
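To make the contrast with voting concrete, here is a hypothetical sketch of a learned combiner over per-hypothesis features of the kind the paper names (which models emitted the word, its part-of-speech, its syllable length). The feature matrix and labels are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-hypothesis features:
# did models m1/m2/m3 output the word, a part-of-speech id, syllable length.
# Label: whether the hypothesized word was actually correct.
X = np.array([
    # m1 m2 m3  pos syl
    [1, 1, 0,  3, 2],
    [1, 0, 0,  7, 1],
    [1, 1, 1,  3, 4],
    [0, 1, 0,  5, 2],
    [1, 0, 1,  3, 3],
    [0, 0, 1,  7, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])

combiner = LogisticRegression().fit(X, y)
# Unlike plain voting, the learned combiner can accept a word backed by a
# single strong model when the other features support it.
print(combiner.predict([[1, 0, 0, 3, 3]]))
```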
Probing attacks are serious threats to integrated circuits. Security products often include a protective layer called a shield that acts like a digital fence. In this article, we demonstrate a new shield structure that is cryptographically secure. This shield is based on the newly proposed SIMON lightweight block cipher and independent mesh lines to ensure the security, against probing attacks, of the hardware located behind the shield. Such a structure can be proven secure against state-of-the-art invasive attacks. For the first time in the open literature, we describe a chip designed with a digital shield, and give an extensive report of its cost in terms of power, sacrificed metal layer(s), and logic (including the logic to connect it to the CPU). We also explain how "Through Silicon Vias" (TSV) technology can be used for protection against both frontside and backside probing.
['Jean-Michel Cioranesco', 'Jean-Luc Danger', 'Tarik Graba', 'Sylvain Guilley', 'Yves Mathieu', 'David Naccache', 'Xuan Thuy Ngo']
Cryptographically secure shields
101,223
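A sketch of the challenge-response idea behind a cryptographic shield follows. The round function is SIMON's (f(x) = (x<<<1 & x<<<8) ^ x<<<2), but the official key schedule and parameter sets are omitted, so this is a SIMON-flavored keyed transform for illustration, not the standardized cipher.

```python
def rotl(x, r, w=32):
    """Left-rotate a w-bit word by r positions."""
    return ((x << r) | (x >> (w - r))) & ((1 << w) - 1)

def simon_like_rounds(x, y, round_keys, w=32):
    """Feistel iteration using SIMON's round function; the real key schedule
    is replaced by caller-supplied round keys (an assumption of this sketch)."""
    for k in round_keys:
        x, y = y ^ (rotl(x, 1, w) & rotl(x, 8, w)) ^ rotl(x, 2, w) ^ k, x
    return x, y

def shield_check(challenge, mesh_readback, round_keys):
    """Active shield protocol sketch: a random challenge is driven through the
    mesh lines while its expected keyed transform is computed on-chip; any
    mismatch on readback indicates a cut or rerouted shield wire."""
    expected = simon_like_rounds(challenge & 0xFFFFFFFF, challenge >> 32,
                                 round_keys)
    return mesh_readback == expected

# Usage: a 64-bit challenge and four hypothetical 32-bit round keys.
keys = [0x0123, 0x4567, 0x89AB, 0xCDEF]
chal = 0xDEADBEEFCAFEBABE
honest = simon_like_rounds(chal & 0xFFFFFFFF, chal >> 32, keys)
assert shield_check(chal, honest, keys)   # an intact mesh passes the check
```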
The shared-buffering architecture is promising for building a large-scale ATM switch with a small buffer size. However, there are two important problems, namely, memory-access speed and complex-control implementation. Advanced 0.5 μm CMOS technology now makes it possible to integrate a huge amount of memory, and enables us to apply a more sophisticated architecture than ever before. We propose the funnel-structured expandable architecture with shared multibuffering and the advanced searchable-address queueing scheme to address these two problems. The funnel structure gives a flexible capability to build ATM switches of various sizes in proportion to the number of LSI chips. The searchable-address queue, in which all the addresses of the stored cells for different output ports are queued in a single-FIFO hardware and the earliest address is found by the search function provided inside the queue, can reduce the total memory capacity drastically, and enables the address queue to be contained inside the LSI chip. This technique also has a great advantage for implementing multicast and multilevel priority-control functions. A 622 Mbit/s 32×8 ATM switch LSI chip set, which consists of a BX-LSI and a CX-LSI, is developed using 0.5 μm pure CMOS technology. By using four chip sets, a 622 Mbit/s 32×32 switch can be installed on one board.
['Hideaki Yamanaka', 'Hirotaka Saito', 'H. Kondoh', 'Yasuhito Sasaki', 'Hirotoshi Yamada', 'Munenori Tsuzuki', 'S. Nishio', 'Hiromi Notani', 'Atsushi Iwabu', 'M. Ishiwaki', 'Shigeki Kohama', 'Yoshio Matsuda', 'K. Oshima']
Scalable shared-buffering ATM switch with a versatile searchable queue
507,793
This paper presents the development of an algebraic dynamic multilevel method (ADM) for fully implicit simulations of multiphase flow in homogeneous and heterogeneous porous media. Built on the fine-scale fully implicit (FIM) discrete system, ADM constructs a multilevel FIM system describing the coupled process on a dynamically defined grid of hierarchical nested topology. The multilevel adaptive resolution is determined at each time step on the basis of an error criterion. Once the grid resolution is established, ADM employs sequences of restriction and prolongation operators in order to map the FIM system across the considered resolutions. Several choices can be considered for prolongation (interpolation) operators, e.g., constant, bilinear and multiscale basis functions, all of which form partition of unity. The adaptive multilevel restriction operators, on the other hand, are constructed using a finite-volume scheme. This ensures mass conservation of the ADM solutions, and as such, the stability and accuracy of the simulations with multiphase transport. For several homogeneous and heterogeneous test cases, it is shown that ADM applies only a small fraction of the full FIM fine-scale grid cells in order to provide accurate solutions. The sensitivity of the solutions with respect to the employed fraction of grid cells (determined automatically based on the threshold value of the error criterion) is investigated for all test cases. ADM is a significant step forward in the application of dynamic local grid refinement methods, in the sense that it is algebraic, allows for systematic mapping across different scales, and applicable to heterogeneous test cases without any upscaling of fine-scale high resolution quantities. It also develops a novel multilevel multiscale method for FIM multiphase flow simulations in natural subsurface formations.
['Matteo Cusini', 'Cor van Kruijsdijk', 'Hadi Hajibeygi']
Algebraic dynamic multilevel (ADM) method for fully implicit simulations of multiphase flow in porous media
685,458
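The restriction/prolongation machinery can be illustrated with a tiny two-level example. This sketch uses constant prolongation (one of the choices the abstract lists) and its transpose as a finite-volume-style restriction; the dynamic, error-driven choice of resolution is omitted.

```python
import numpy as np

def constant_prolongation(fine_to_coarse):
    """P maps coarse unknowns to fine cells by piecewise-constant injection;
    its columns form a partition of unity, as required."""
    n_fine = len(fine_to_coarse)
    n_coarse = max(fine_to_coarse) + 1
    P = np.zeros((n_fine, n_coarse))
    for i, c in enumerate(fine_to_coarse):
        P[i, c] = 1.0
    return P

def fv_restriction(fine_to_coarse):
    """Finite-volume restriction: sums fine-cell residuals inside each coarse
    cell, which is what makes the coarse solution mass-conservative."""
    return constant_prolongation(fine_to_coarse).T

# Usage: 8 fine cells; cells 0-3 aggregate into coarse cell 0, cells 4-7 into 1.
mapping = [0, 0, 0, 0, 1, 1, 1, 1]
P, R = constant_prolongation(mapping), fv_restriction(mapping)
A_fine = np.diag(2 * np.ones(8)) - np.diag(np.ones(7), 1) - np.diag(np.ones(7), -1)
A_coarse = R @ A_fine @ P     # the coarse-level (here: two-level) system
```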
In this paper, we describe CALM, a method for building statistical language models for the Web. CALM addresses several unique challenges in dealing with Web contents. First, CALM does not rely on the whole corpus being available to build the language model. Instead, we design CALM to progressively adapt itself as Web chunks are made available by the crawler. Second, given the dynamic and dramatic changes in Web contents, CALM is designed to quickly enrich its lexicon and N-grams as new vocabulary and phrases are discovered. To reduce the amount of heuristics and human intervention typically needed for model adaptation, we derive an information-theoretic formula for CALM to facilitate optimal adaptation in the maximum a posteriori (MAP) sense. Testing against a collection of Web chunks where new vocabulary and phrases are dominant, we show CALM can achieve comparable and satisfactory models as measured by perplexity. We also show CALM is robust against overtraining and the initial condition, suggesting that any assumptions made in obtaining the initial model gradually see their impact diminish as CALM runs its full course and adapts to more data.
['Kuansan Wang', 'Xiaolong Li']
Efficacy of a constantly adaptive language modeling technique for web-scale applications
518,868
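The flavor of MAP adaptation can be shown on a unigram toy model: the adapted distribution is a convex combination of the prior model and the maximum-likelihood estimate from the newly crawled chunk. The fixed weight `beta` below is a placeholder assumption; CALM's contribution is precisely to derive the optimal weighting information-theoretically rather than fixing it by hand.

```python
from collections import Counter

def map_adapt_unigram(base_probs, new_chunk, beta=0.7):
    """MAP-flavored unigram adaptation sketch: blend the prior model with the
    ML estimate from the new Web chunk; new words enter the lexicon with
    probability mass from the chunk alone."""
    counts = Counter(new_chunk)
    total = sum(counts.values())
    vocab = set(base_probs) | set(counts)
    return {w: beta * base_probs.get(w, 0.0)
               + (1 - beta) * counts[w] / total
            for w in vocab}

# Usage: a new chunk introduces the out-of-vocabulary word 'podcast'.
base = {'the': 0.5, 'web': 0.3, 'model': 0.2}
adapted = map_adapt_unigram(base, ['the', 'web', 'podcast', 'podcast'])
```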
A Method for Localization of Computational Node and Proxy Server in Educational Data Synchronization
['Süleyman Eken', 'Fidan Kaya', 'Ahmet Sayar', 'Adnan Kavak', 'Suhap Şahin']
A Method for Localization of Computational Node and Proxy Server in Educational Data Synchronization
654,216
This paper describes our recent work on integrating speech recognition and machine translation to improve speech translation performance. Two approaches are applied and their performance is evaluated in the IWSLT 2005 workshop. The first is direct N-best hypothesis translation; the second, a pseudo-lattice decoding algorithm for translating word lattices, can dramatically reduce the computation cost incurred by the first approach. We found in the experiments that both of these approaches can improve speech translation significantly.
['Ruiqiang Zhang', 'Genichiro Kikui', 'Hirofumi Yamamoto']
Using multiple recognition hypotheses to improve speech translation.
246,031
The aim of this paper is to provide a variational interpretation of the nonlinear filter in continuous time. A time-stepping procedure is introduced, consisting of successive minimization problems in the space of probability densities. The weak form of the nonlinear filter is derived via analysis of the first-order optimality conditions for these problems. The derivation shows the nonlinear filter dynamics may be regarded as a gradient flow, or a steepest descent, for a certain energy functional with respect to the Kullback--Leibler divergence. The second part of the paper is concerned with derivation of the feedback particle filter algorithm, based again on the analysis of the first variation. The algorithm is shown to be exact. That is, the posterior distribution of the particle matches exactly the true posterior, provided the filter is initialized with the true prior.
['Richard S. Laugesen', 'Prashant G. Mehta', 'Sean P. Meyn', 'Maxim Raginsky']
Poisson's Equation in Nonlinear Filtering
976,570
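Schematically, the time-stepping procedure described here is a proximal scheme in the space of densities; a hedged rendering (the energy functional is left generic, since the abstract does not spell it out) is:

```latex
\rho_{n+1} \;=\; \operatorname*{arg\,min}_{\rho}\;
\Big\{\, D_{\mathrm{KL}}\!\left(\rho \,\middle\|\, \rho_n\right)
\;+\; \Delta t\,\mathcal{E}(\rho) \,\Big\},
\qquad n = 0, 1, 2, \dots
```

Sending the step size to zero recovers the gradient-flow reading of the nonlinear filter stated above: steepest descent of the energy with respect to the Kullback-Leibler divergence.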
Interpolation methods are widely used in geoscience applications to reconstruct multivariate data from irregular samples. This paper describes a quantitative methodology for assessing the performance of various state-of-the-art interpolation methods. The methodology consists of simulation-validation and cross-validation using simulated and real data respectively, and has recently been applied to study the reconstruction of total electron content maps of the ionosphere. These two approaches are described and a study of the various artefacts associated with different interpolation methods also presented, including their origins and typical locations. Finally, the use of the statistical moments of error histograms as a method of evaluating techniques for biases and skew is described, as well as providing confidence bounds on error values. The methodology and artefact analysis should be of use to anyone who uses multivariate interpolation methods.
['Matthew P. Foster', 'Adrian N. Evans']
Performance Evaluation of Multivariate Interpolation Methods for Scattered Data in Geoscience Applications
439,127
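A minimal version of the cross-validation arm of the methodology might look like the following; `griddata` stands in for whichever interpolation method is under test, and the test function and sample layout are invented for illustration.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.stats import skew, kurtosis

def loo_cross_validate(points, values, method='linear'):
    """Leave-one-out cross-validation of a scattered-data interpolator:
    withhold each sample, predict it from the rest, and keep the error."""
    errors = []
    for i in range(len(values)):
        mask = np.arange(len(values)) != i
        pred = griddata(points[mask], values[mask], points[i:i + 1],
                        method=method)
        if not np.isnan(pred[0]):       # hull vertices may be unpredictable
            errors.append(values[i] - pred[0])
    return np.array(errors)

# The statistical moments of the error histogram reveal bias (mean),
# spread (std), and asymmetry (skew), as the methodology prescribes.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200, 2))
vals = np.sin(4 * pts[:, 0]) * np.cos(4 * pts[:, 1])
err = loo_cross_validate(pts, vals)
print(err.mean(), err.std(), skew(err), kurtosis(err))
```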
To predict network security situations better using expert knowledge and quantitative data, a new forecasting model known as cloud belief rule base (CBRB) model is proposed. The CBRB model utilizes the cloud model to describe the referential point of belief rule, which is more accurate for describing expert knowledge. Moreover, to achieve the optimal parameters of the proposed model, a constraint covariance matrix adaptation evolution strategy (CMA-ES) algorithm is presented in this letter. A case study for network security situation prediction is conducted with CBRB and CMA-ES. The experimental results demonstrate the effectiveness and practicality of the proposed CBRB model.
['Guan-Yu Hu', 'Peili Qiao']
Cloud Belief Rule Base Model for Network Security Situation Prediction
720,976
This paper proposes a dataset, called ALIF, for Arabic embedded text recognition in TV broadcast. The dataset is publicly available for a non-commercial use. It is composed of a large number of manually annotated text images that were extracted from Arabic TV broadcast. It is the first public dataset dedicated to the development and the evaluation of video Arabic OCR techniques. Text images in the dataset are highly variable in terms of text characteristics (fonts, sizes, colors…) and acquisition conditions (background complexity, low resolution, non-uniform luminosity and contrast…). Moreover, an important part of the dataset is finely annotated, i.e. the text in an image is segmented into characters, paws and words, and each segment is labeled. The dataset can hence be used for both segmentation-based and segmentation-free text recognition techniques. In order to illustrate how the ALIF dataset can be used, the results of an evaluation study that we have conducted on different techniques for Arabic text recognition are also presented.
['Sonia Yousfi', 'Sid-Ahmed Berrani', 'Christophe Garcia']
ALIF: A dataset for Arabic embedded text recognition in TV broadcast
396,301
This paper relates the style of 16th century Flemish paintings by Goossen van der Weyden (GvdW) to the style of preliminary sketches or underpaintings made prior to executing the painting. Van der Weyden made underpaintings in markedly different styles for reasons as yet not understood by art historians. The analysis presented here starts from a classification of the underpaintings into four distinct styles by experts in art history. Analysis of the painted surfaces by a combination of wavelet analysis, hidden Markov trees and boosting algorithms can distinguish the four underpainting styles with greater than 90% cross-validation accuracy. On a subsequent blind test this classifier provided insight into the hypothesis by art historians that different patches of the finished painting were executed by different hands.
['Josephine Wolff', 'Maximiliaan Martens', 'Sina Jafarpour', 'Ingrid Daubechies', 'A. Robert Calderbank']
Uncovering elements of style
228,127
We propose a new hybrid wireless geolocation scheme that requires only one observed quantity, namely, the received signal. The attenuation model is explored herein to capture the propagation features of the received signal, thus providing a more accurate approach to wireless geolocation. To investigate geolocation accuracy, we consider time-of-arrival (ToA) estimation in the presence of path attenuation. The maximum-correlation (MC) estimator is revisited, and the exact maximum-likelihood (ML) estimator is derived to estimate the ToA. The error performance of the ToA estimates is derived using a Taylor expansion. It is shown that the ML estimate is unbiased and has a smaller error variance than the MC estimate. Numerical results illustrate that, for a low effective bandwidth, the ML estimator significantly outperforms the MC estimator. Afterward, we derive the Cramer-Rao bound (CRB) for the mobile position estimation. The obtained result, which is applicable to any value of the path loss exponent, gives a generalized form of the CRB for the ordinary geolocation approach. In seven hexagonal cells, numerical examples show that the accuracy of mobile position estimation exploiting the path loss is improved compared with that obtained by the usual geolocation.
['Bamrung Tau Sieskul', 'Feng Zheng', 'Thomas Kaiser']
A Hybrid SS–ToA Wireless NLoS Geolocation Based on Path Attenuation: ToA Estimation and CRB for Mobile Position Estimation
403,229
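The maximum-correlation estimator, at least, is simple enough to sketch: correlate the received signal with the known template and take the lag of the peak. The pulse shape, delay, attenuation, and noise level below are illustrative; the paper's ML estimator additionally models the path attenuation explicitly.

```python
import numpy as np

def mc_toa_estimate(received, template, fs):
    """Maximum-correlation ToA sketch: the estimated arrival time is the lag
    of the cross-correlation peak, converted to seconds by the sample rate."""
    corr = np.correlate(received, template, mode='full')
    lag = np.argmax(np.abs(corr)) - (len(template) - 1)
    return lag / fs

# Usage: a pulse delayed by 100 samples in noise, fs = 1 MHz.
fs = 1_000_000
template = np.hanning(64)
rx = np.zeros(1024)
rx[100:164] = 0.3 * template            # attenuated arrival
rx += 0.01 * np.random.default_rng(1).standard_normal(rx.size)
print(mc_toa_estimate(rx, template, fs))  # ~1e-4 s
```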
The growing use of second-screen devices stresses the importance of finding a balance between engagement, distraction, and disturbance of their users while simultaneously watching television. In this framework, this article reports on a study designed to analyze the efficiency, impact, and interference of different notification strategies, aiming to identify the best approach to be used when an alert is needed in second-screen scenarios. A prototype able to deliver synchronized information related to TV content, with intervals of 10, 30 and 60 s, followed by individual or combined notifications (e.g., audio, visual, and haptic on the tablet, and visual on the TV) was developed. A laboratory adapted to replicate a living room was set up, and a test that involved watching three segments of a 20-min clip while using the prototype was carried out with 30 participants under a cognitive walk-through protocol. Quantitative and qualitative results show that receiving notifications while watching TV is effective ...
['Jorge Abreu', 'Pedro Almeida', 'Telmo Silva', 'Mónica Aresta']
Notifications Efficiency, Impact, and Interference in Second-Screen Scenarios
866,991
Word2Vec is a widely used algorithm for extracting low-dimensional vector representations of words. It generated considerable excitement in the machine learning and natural language processing (NLP) communities recently due to its exceptional performance in many NLP applications such as named entity recognition, sentiment analysis, machine translation and question answering. State-of-the-art algorithms including those by Mikolov et al. have been parallelized for multi-core CPU architectures but are based on vector-vector operations that are memory-bandwidth intensive and do not efficiently use computational resources. In this paper, we improve reuse of various data structures in the algorithm through the use of minibatching, hence allowing us to express the problem using matrix multiply operations. We also explore different techniques to distribute word2vec computation across nodes in a compute cluster, and demonstrate good strong scalability up to 32 nodes. In combination, these techniques allow us to scale up the computation near linearly across cores and nodes, and process hundreds of millions of words per second, which is the fastest word2vec implementation to the best of our knowledge.
['Shihao Ji', 'Nadathur Satish', 'Sheng Li', 'Pradeep Dubey']
Parallelizing Word2Vec in Shared and Distributed Memory
710,511
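The core trick, recasting per-pair vector updates as batched matrix products, can be sketched as follows. Shapes and hyperparameters are illustrative, and the paper's additional optimizations (such as sharing negative samples within a batch and the multi-node distribution scheme) are omitted.

```python
import numpy as np

def sgns_minibatch_step(Win, Wout, center_ids, ctx_ids, neg_ids, lr=0.025):
    """One minibatched skip-gram negative-sampling step written with batched
    matrix products instead of per-pair vector ops. Shapes: center_ids (B,),
    ctx_ids (B,), neg_ids (B, K)."""
    v = Win[center_ids]                           # (B, d) center vectors
    u = np.concatenate([Wout[ctx_ids][:, None, :],
                        Wout[neg_ids]], axis=1)   # (B, 1+K, d) targets
    labels = np.zeros(u.shape[:2]); labels[:, 0] = 1.0
    scores = 1.0 / (1.0 + np.exp(-np.einsum('bd,bkd->bk', v, u)))
    g = (labels - scores) * lr                    # (B, 1+K) logistic gradients
    grad_v = np.einsum('bk,bkd->bd', g, u)        # batched matrix products
    grad_u = np.einsum('bk,bd->bkd', g, v)
    np.add.at(Win, center_ids, grad_v)            # scatter-add handles repeats
    ids = np.concatenate([ctx_ids[:, None], neg_ids], axis=1)
    np.add.at(Wout, ids.ravel(), grad_u.reshape(-1, u.shape[-1]))
    return Win, Wout

# Usage with a toy vocabulary of 1000 words and 50-dim embeddings.
rng = np.random.default_rng(0)
Win = rng.normal(scale=0.1, size=(1000, 50))
Wout = np.zeros((1000, 50))
c = rng.integers(0, 1000, size=128)               # minibatch of centers
o = rng.integers(0, 1000, size=128)               # their context words
neg = rng.integers(0, 1000, size=(128, 5))        # K=5 negatives each
sgns_minibatch_step(Win, Wout, c, o, neg)
```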
This paper presents a Bayesian learning approach to large margin classifier for hidden Markov model (HMM) based speech recognition. We build the Bayesian large margin HMMs (BLM-HMMs) and improve the model generalization for handling unknown test environments. Using BLM-HMMs, the variational Bayesian HMM parameters are estimated by maximizing lower bound of a marginal likelihood over the uncertainties of HMM parameters. The Bayesian large margin estimation is performed with frame selection mechanism, and is illustrated to meet the objective of support vector machines, i.e. maximal class margin and minimal training errors. The new objective function is not only interpreted as a discriminative criterion, but also feasible to deal with model selection and adaptive training. Experiments on phone recognition show that BLM-HMMs perform better than other generative and discriminative models.
['Jung-Chun Chen', 'Jen-Tzung Chien']
Bayesian large margin hidden Markov models for speech recognition
249,904
In this paper, we examine how software vulnerabilities affect firms that license software and consumers that purchase software. In particular, we model three decisions of the firm: (i) an upfront investment in the quality of the software to reduce potential vulnerabilities; (ii) a policy decision whether to announce vulnerabilities; and (iii) a price for the software. We also model two decisions of the consumer: (i) whether to purchase the software; and (ii) whether to apply a patch.
['Jay-Pil Choi', 'Chaim Fershtman', 'Neil Gandal']
Internet Security, Vulnerability Disclosure and Software Provision
675,531
Microscopy imaging, including fluorescence microscopy and electron microscopy, has a prominent role in life science and medical research. During the past two decades, biological imaging has undergone a revolution by way of the development of new microscopy techniques that allow the visualization of tissues, cells, proteins and macromolecular structures at all levels of resolution, physiological states, chemical composition and dynamic analysis. Thanks to recent advances in optics, digital sensing technologies and labeling probes (i.e., XFP—Colored Fluorescence Protein), we can now visualize subcellular components and organelles at the scale of a few nanometers to several hundreds of nanometers. As a result, fluorescent microscopy has become the workhorse of modern biology. Further technological advances include structured and coherent light sources, faster and more sensitive detectors, smaller and more specific molecular probes and automation processes for image acquisition. Additionally, there is a push towards multimodal imaging in order to gather complementary information such as the concentration of fluorophores at various wavelengths and the refractive index of a given sample (phase imaging).
['Charles Kervrann', 'Scott T. Acton', 'Jean-Christophe Olivo-Marin', 'Carlos Oscar Sánchez Sorzano', 'Michael Unser']
Introduction to the Issue on Advanced Signal Processing in Microscopy and Cell Imaging
644,760
CUT FOR CORE LOGIC
['Neil Tennant']
CUT FOR CORE LOGIC
703,896
In this study, the authors investigate the energy-aware quality of information (QoI) maximisation problem by jointly optimising the sensor selection, sampling rate, packet-dropped rate, and transmit power in wireless sensor networks. By introducing the weight parameters, the authors first present a revenue-cost (RC) function which combines the optimisation objectives of the QoI and energy expenditure into a single objective to capture the tradeoff between them. Then, a stochastic optimisation programming is formulated to maximise the long-term average RC value subject to the network stability constraint. Using the Lyapunov drift theory, the authors develop a collaborative sensing and transmit power control algorithm that can guarantee the worst-case delay for each packet. Simulation results demonstrate the advantages of the proposed algorithm.
['Pengfei Du', 'Qinghai Yang', 'Qingsu He', 'Kyung Sup Kwak']
Energy-aware quality of information maximisation for wireless sensor networks
852,893
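For reference, algorithms built on Lyapunov drift theory typically minimize a drift-plus-penalty bound in each slot; a schematic, hedged form (the queue processes, arrivals, services, constant B, and tradeoff weight V are generic notation, not the paper's) is:

```latex
\Delta(\Theta(t)) - V\,\mathbb{E}\!\left\{\mathrm{RC}(t)\mid\Theta(t)\right\}
\;\le\; B \;-\; V\,\mathbb{E}\!\left\{\mathrm{RC}(t)\mid\Theta(t)\right\}
\;+\; \sum_{i} Q_i(t)\,\mathbb{E}\!\left\{a_i(t)-b_i(t)\mid\Theta(t)\right\}
```

Greedily minimizing the right-hand side in each slot yields the per-slot sensing and power-control decisions, with V trading queue backlog (and hence delay) against the achieved revenue-cost value.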
Reactive routing protocols in Mobile Ad Hoc Networks (MANETs) such as Ad Hoc On-Demand Distance Vector (AODV) and Dynamic Source Routing (DSR) are vulnerable to sinkhole attack. Sinkhole attack is a route disruption attack. The malicious sinkhole nodes attempt to lure almost all the traffic by propagating false routing information and disrupting the normal operation of the network. The sinkhole nodes should be detected and isolated from the MANET as early as possible. Hence, a consistent, reliable and effective method of detecting sinkhole nodes is required. We have developed a method to detect the sinkhole nodes by observing the behaviour of the mobile nodes in the neighbourhood and isolating them from the MANET. The proposed method was simulated and the experimental analysis was carried out. The results show that the proposed method is consistent, reliable and effective in detecting and isolating the sinkhole nodes in MANET.
['Immanuel John Raja Jebadurai', 'Elijah Blessing Rajsingh', 'Getzi Jeba Leelipushpam Paulraj', 'Salaja Silas']
EDIS: an effective method for detection and isolation of sinkhole attacks in mobile ad hoc networks
957,392
Starting from a quest for the meanings of the tetralemma that have been overlooked so far, we consider a logic of argumentation based on the tetralemma. It allows for Eastern arguments as well as Western arguments, promoting the fusion of Eastern and Western reasoning in argumentation.
['Hajime Sawamura', 'Edwin D. Mares']
Logic of Argumentation based on Tetralemma with an Eastern Mind
67,465
We have been developing a virtual surgery system that is capable of simulating surgical maneuvers on elastic organs. In order to perform such maneuvers, we have created a deformable organ model using a sphere-filled method instead of the finite element method. This model is suited for real-time simulation and quantitative deformation. Furthermore, we have equipped this model with a sense of touch and a sense of force by connecting it to a force feedback device. However, in the initial stage the model became problematic when faced with complicated incisions. Therefore, we modified this model by developing an algorithm for organ deformation that performs various, complicated incisions while taking into account the effect of gravity. As a result, the sphere-filled model allowed our system to respond to various incisions that deform the organ. Thus, various physical manipulations that involve pressing, pinching, or incising an organ's surface can be performed. Furthermore, the deformation of the internal organ structures and changes in organ vasculature can be observed via the internal spheres' behavior.
['Shigeyuki Suzuki', 'Naoki Suzuki', 'Asaki Hattori', 'Akihiko Uchiyama', 'Susumu Kobayashi']
Sphere-filled organ model for virtual surgery system
43,666